Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit: they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, this means there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. And since ‘functional properties’ are rather vague, consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside”. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about (suffering) allows a particularly clear critique of what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and that it’s impossible to objectively tell whether a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts: “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering (whether monkeys can suffer, or whether dogs can), it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; whether any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of a disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something objective driving the convergence; in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming one.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but to a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness, enumerating its coherent constituent pieces, has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became; and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger, not smaller, as they’ve looked into techniques for how to “de-reify” consciousness while preserving some flavor of moral value. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but it’s also consistent with consciousness not being a reification at all, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as trying to treat a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’, then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of the bag-of-popcorn-as-computational-system under which you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
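To make this interpretation-relativity concrete, here is a minimal Python sketch (the state sequence and both interpretation maps are hypothetical illustrations of my own, not anything Brian or FRI has proposed): one fixed “physical” trajectory, two arbitrary mappings, two entirely different computations.

```python
# A toy "physical system": a fixed sequence of states (e.g., popcorn
# configurations after successive shakes), labeled 0..3.
trajectory = [0, 1, 2, 3]

# Interpretation A: read each state as a counter being incremented.
interp_a = {0: 0, 1: 1, 2: 2, 3: 3}

# Interpretation B: read the very same states as steps of a different
# program, e.g. one that toggles a switch.
interp_b = {0: "off", 1: "on", 2: "off", 3: "on"}

computation_a = [interp_a[s] for s in trajectory]
computation_b = [interp_b[s] for s in trajectory]

print(computation_a)  # [0, 1, 2, 3] -- "it computed a counter"
print(computation_b)  # ['off', 'on', 'off', 'on'] -- "it computed a toggle"
```

Nothing in the trajectory itself privileges either mapping; the “computation performed” lives in the interpretation, not the physics.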

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (2004) provides a more formal argument to this effect, that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible, in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.
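McCabe’s point that bit patterns only represent numbers under an interpretation can be shown concretely; the following Python sketch (my own illustration, not McCabe’s) reads one fixed 32-bit memory state under three standard numeric conventions.

```python
import struct

# One fixed "physical" state of memory: the 32 bits 0xC1200000.
bits = 0xC1200000

# The same bit pattern under three standard numeric interpretations:
as_unsigned = bits                                           # unsigned integer
as_signed = struct.unpack('<i', struct.pack('<I', bits))[0]  # two's complement
as_float = struct.unpack('<f', struct.pack('<I', bits))[0]   # IEEE 754 single

print(as_unsigned)  # 3240099840
print(as_signed)    # -1054867456
print(as_float)     # -10.0
```

The electronic state never changes; which number it “is” depends entirely on the convention the reader brings to it, which is exactly the interpretation-dependence McCabe describes.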

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point, that “There’s no objective meaning to ‘the computation that this physical system is implementing’”, then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.


Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
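The probability bookkeeping in Aaronson’s Vaidman-bomb example can be checked numerically. The following Python sketch is my own simplified model of the amplitude accounting (not Aaronson’s code, and not a full quantum simulation): the no-bomb case is treated as a coherent rotation of amplitude toward the query state, and the bomb case as repeated ε²-probability collapses.

```python
import math

eps = 0.01
rounds = round(1 / eps)  # 1/eps repetitions, here 100

# No bomb: each round coherently rotates amplitude eps toward the
# "query the bomb" state, so after 1/eps rounds the query amplitude
# is sin(rounds * eps) -- order 1, as claimed.
p_query_no_bomb = math.sin(rounds * eps) ** 2

# Bomb: each round the eps-amplitude query branch decoheres, setting
# the bomb off with probability eps**2; the total firing probability
# after all rounds is ~ (1/eps) * eps**2 = eps.
p_explode = 1 - (1 - eps**2) ** rounds

print(round(p_query_no_bomb, 3))  # 0.708 -- order-1 probability
print(round(p_explode, 5))        # 0.00995 -- approximately eps
```

Shrinking ε drives the explosion probability toward zero while the no-bomb query probability stays order 1, matching the quoted arithmetic.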

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be able to be uncomputed (i.e., reversed), would suggest s-risks aren’t as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, a red herring which it not only fails to define, but admits is undefinable. It makes no positive assertion. Functionalism is skepticism: nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism (which is to say radical skepticism/eliminativism, the metaphysics of last resort) only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problems people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’, or even to pick between interpretations (e.g. my popcorn example). If this isn’t the case— why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”– i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many– but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still speaking about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).


In conclusion- I think FRI has a critically important goal- reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.


Mike Johnson

Qualia Research Institute

Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.


My sources for FRI’s views on consciousness:

Flavors of Computation are Flavors of Consciousness
Is There a Hard Problem of Consciousness?
Consciousness Is a Process, Not a Moment
How to Interpret a Physical System as a Mind
Dissolving Confusion about Consciousness
Debate between Brian & Mike on consciousness
Max Daniel’s EA Global Boston 2017 talk on s-risks
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering
The Internet Encyclopedia of Philosophy on functionalism
Gordon McCabe on why computation doesn’t map to physics
Toby Ord on hypercomputation, and how it differs from Turing’s work
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood
Scott Aaronson’s thought experiments on computationalism
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity

My work on formalizing phenomenology:

My meta-framework for consciousness, including the Symmetry Theory of Valence
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work

My colleague Andrés’s work on formalizing phenomenology:

A model of DMT-trip-as-hyperbolic-experience
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data
A parametrization of various psychedelic states as operators in qualia space
A brief post on valence and the fundamental attribution error
A summary of some of Selen Atasoy’s current work on Connectome Harmonics

Connectome-Specific Harmonic Waves on LSD

The harmonics-in-connectome approach to modeling brain activity is a fascinating paradigm. I was privileged to attend this talk at the 2017 Psychedelic Science conference, and I’m extremely happy to find out that MAPS has already uploaded the talks. Dive in!

Below is a partial transcript of the talk. I figured that I should get it in written form in order to be able to reference it in future articles. Enjoy!

[After a brief introduction about harmonic waves in many different kinds of systems… at 7:04, Selen Atasoy]:

We applied the [principle of harmonic decomposition] to the anatomy of the brain. We made them connectome-specific. So first of all, what do I mean by the human connectome? Today thanks to the recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex. And the set of all of the different connections is called the connectome.


Now, because we know the equation governing these harmonic waves, we can extend this principle to the human brain by simply solving the same equation on the human connectome instead of a metal plate (Chladni plates) or the anatomy of the zebra. And if you do that, we get a set of harmonic patterns, this time emerging in the cortex. And we decided to call these harmonic patterns connectome harmonics. And each of these connectome harmonic patterns is associated with a different frequency. And because they correspond to different frequencies they are all independent, and together they give you a new language, so to speak, to describe neural activity. So in the same way the harmonic patterns are building blocks of these complex patterns we see on animal coats, these connectome harmonics are the building blocks of the complex spatio-temporal patterns of neural activity.

Describing and explaining neural activity by using these connectome harmonics as brain states is really not very different than decomposing a complex musical piece into its musical notes. It’s simply a new way of representing your data, or a new language to express it.

What is the advantage of using this new language? So why not use the state-of-the-art conventional neuroimaging analysis methods? Because these connectome harmonics are, by definition, the vibration modes, but applied to the anatomy of the human brain; and if you use them as brain states to express neural activity, we can compute certain fundamental quantities very easily, such as the energy or the power.

The power would be the strength of activation of each of these states in neural activity. So how strongly that particular state contributes to neural activity. And the energy would be a combination of this strength of activation with the intrinsic energy of that particular brain state, and the intrinsic energy comes from the frequency of its vibration (in the analogy of vibration).
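The decomposition Atasoy describes can be illustrated with a toy computation. This is my sketch, not the study’s actual pipeline: the harmonics are taken as eigenvectors of a graph Laplacian built from a (here randomly generated, hypothetical) connectome, activity is projected onto them, and the power and energy are computed per brain state as described above; the exact energy weighting used in the real analysis may differ.

```python
import numpy as np

# Toy "connectome": a symmetric adjacency matrix over n nodes. This is
# hypothetical data; the real analysis uses ~20,000 cortical vertices
# connected by DTI-traced white matter fibers.
rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Connectome harmonics: eigenvectors of the graph Laplacian L = D - A.
# The eigenvalues play the role of (squared) spatial frequencies.
L = np.diag(A.sum(axis=1)) - A
eigvals, harmonics = np.linalg.eigh(L)   # columns = harmonic patterns

# Decompose a snapshot of "neural activity" into harmonic contributions.
activity = rng.standard_normal(n)
a = harmonics.T @ activity               # contribution of each brain state

# Power: strength of activation of each brain state.
power = np.abs(a)
# Energy: strength of activation combined with the state's intrinsic,
# frequency-derived energy -- here taken as the Laplacian eigenvalue.
energy = (a ** 2) * eigvals

print("total power:", power.sum(), "total energy:", energy.sum())
```

Because the harmonics form an orthonormal basis, the decomposition is lossless: `harmonics @ a` reconstructs the original activity exactly, which is what makes this “a new language to express the data” rather than a lossy summary.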

So in this study we looked at the power and the energy of these connectome harmonic brain states in order to explore the neural correlates of the LSD experience.

We looked at 12 healthy participants who received either 75µg of LSD (IV) or a placebo, over two sessions. These two sessions were 14 days apart, in counter-balanced order. And the fMRI scans consisted of 3 eyes-closed resting-state scans, each lasting 7 minutes; in the first and the third scan the participants were simply resting, eyes closed, but in the second scan they were also listening to music. And after each scan, the participants rated the intensity of certain experiences.


So if we look, first, at the total power and the total energy of each of these scans under LSD and placebo, what we see is that under LSD both the power as well as the energy of brain activity increases significantly.

And if we compute the probability of observing a certain energy value on LSD or placebo, what we see is that the peak of this probability distribution clearly shoots towards high energy values under LSD.


And that peak is even slightly higher in terms of probability when the subjects were listening to music. So if we interpret that peak as, in a way, the characteristic energy of a state, you can see that it shifts towards higher energy under LSD, and that this effect is intensified when listening to music.

And then we asked, which of these brain states, which of these frequencies, were actually contributing to this energy increase. So we partitioned the spectrum of all of these harmonic brain states into different parts and computed the energy of each of these partitions individually. So in total we have around 20,000 brain states. And if you look at the energy differences in LSD and placebo, what we find is that for a very narrow range of low frequencies actually these brain states were decreasing their energy on LSD. But for a very broad range of high frequencies, LSD was inducing an energy increase. So this says that LSD alters brain dynamics in a very frequency-selective manner. And it was causing high frequencies to increase their energy.

So next we looked at whether these changes we are observing in brain activity are correlated with any of the experiences that the participants themselves were having in that moment. If you look at the energy changes within the narrow range of low frequencies, we found that the energy changes in that range significantly correlated with the intensity of the experience of ego dissolution. The loss of subjective self.


And very interestingly, the same range of energy change within the same frequency range also significantly correlated with the intensity of emotional arousal, whether the experience was positive or negative. This could be quite relevant for studies looking into potential therapeutic applications of LSD.


Next, when we look at a slightly higher range of frequencies, what we found was that the energy changes within that range significantly correlated with the positive mood.


In brief, this suggests that it’s rather the low frequency brain states which correlated with ego dissolution or with emotional arousal, and it’s the activity of higher frequencies that is correlated with the positive experiences.

Next, we wanted to check the size of the repertoire of active brain states. And if you look at the probability of activation for any brain state (so here we are not distinguishing brain states by frequency), what we observe is that the probability of a brain state being silent (zero contribution) actually decreased under LSD. And the probability of a brain state contributing very strongly, which corresponds to the tails of these distributions, increased under LSD. So this suggests that LSD was activating more brain states simultaneously.


And if we go back to the music analogy that we used in the beginning, that would correspond to playing more musical notes at the same time. And it’s very interesting, because studies that have looked at jazz improvisation show that improvising jazz musicians play significantly more musical notes compared to memorized play. And this is what we seem to be finding under the effect of LSD: your brain is actually activating more of these brain states simultaneously.


And it does so in a very non-random fashion. So if you look at the correlation across different frequencies - at the co-activation patterns, and their activation over time - you may interpret it as the “communication across various frequencies”. What we found is that for a very broad range of the spectrum, there was a higher correlation across different frequencies in their activation patterns under LSD compared to placebo.

So this really says that LSD is actually causing a reorganization, rather than a random activation of brain states. It’s expanding the repertoire of active brain states, while maintaining -or maybe better said- recreating a complex but spontaneous order. And in the musical analogy it’s really very similar to jazz improvisation, to think about it in an intuitive way.

Now, there is actually one particular situation when dynamical systems such as the brain, and systems that change their activity over time, show this type of emergence of complex order, or enhanced improvisation, enhanced repertoire of active states. And this is when they approach what is called criticality. Now, criticality is this special type of behavior, special type of dynamics, that emerges right at the transition between order and chaos. When these two (extreme) types of dynamics are in balance. And criticality is said to be “the constantly shifting battle zone between stagnation and anarchy. The one place where a complex system can be spontaneous, adaptive, and alive” (Waldrop 1992). So if a system is approaching criticality, there are very characteristic signatures that you would observe in the data, in the relationships that you plot in your data.

And one of them - probably the most characteristic of them - is the emergence of power laws. So what does that mean? If you plot one observable in our data - in our case, for example, the maximum power of a brain state - in relationship to another observable - for example, the wavenumber, or frequency, of that brain state - in logarithmic coordinates, then if they follow a power law, they will approximate a line. And this is exactly what we observe in our data, surprisingly for both LSD as well as for placebo, but with one very significant and remarkable difference: because the high frequencies increase their power on LSD, this distribution follows this power law, this line, way more accurately under LSD compared to placebo. And here you see the error of the fit, which is decreasing.
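The log-log test being described here can be sketched numerically. This is an illustrative reconstruction on synthetic data, not the study’s analysis: generate a noisy power law, fit a line in logarithmic coordinates, and use the residual error of the fit as the “goodness of power-law behavior” the talk refers to.

```python
import numpy as np

# Synthetic observables (hypothetical stand-ins for the talk's quantities):
# the maximum power of each brain state vs. its wavenumber, following
# P ~ k^(-alpha) with multiplicative noise.
rng = np.random.default_rng(1)
k = np.arange(1, 201)                 # wavenumber of each brain state
alpha_true = 1.5
power = k ** (-alpha_true) * np.exp(0.1 * rng.standard_normal(k.size))

# A power law P = C * k^(-alpha) becomes a line in log-log coordinates:
# log P = log C - alpha * log k, so we fit a degree-1 polynomial.
slope, intercept = np.polyfit(np.log(k), np.log(power), 1)

# "Error of the fit": RMS deviation of the data from the fitted line,
# analogous to the fit error the talk reports shrinking under LSD.
residuals = np.log(power) - (slope * np.log(k) + intercept)
rms_error = np.sqrt(np.mean(residuals ** 2))

print(f"estimated alpha = {-slope:.2f}, rms error = {rms_error:.3f}")
```

A distribution that is “more accurately” power-law, as claimed for the LSD condition, would show up in this sketch as a smaller `rms_error` for the same kind of fit.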

This suggests that LSD shoots brain dynamics further towards criticality.  The signature of criticality that we find in LSD and in placebo is way more enhanced, way more pronounced, under the effect of LSD. And we found the same effect, not only for the maximum power, but also for the mean power, as well as for the power of fluctuations.


So this suggests that the criticality actually may be the principle that is underlying this emergence of complex order, and this reorganization of brain dynamics, and which leads to enhanced improvisation in brain activity.

So, to summarize briefly, what we found was that LSD increases the total power as well as the total energy of brain activity. It selectively activates high frequency brain states, and it expands the repertoire of active brain states in a very non-random fashion. And the principle underlying all of these changes seems to be a reorganization of brain dynamics, right at criticality, right at the edge of chaos - at the balance between order and chaos. And very interestingly, the “edge of chaos”, or the edge of criticality, is said to be where “life has enough stability to sustain itself, and enough creativity to deserve the name of life” (Waldrop 1992). So I leave you with that, and thank you for your attention.

[Applause; ends at 22:00, followed by Q&A]

ELI5 “The Hyperbolic Geometry of DMT Experiences”


I wrote the following in response to a comment on the r/RationalPsychonaut subreddit about this DMT article I wrote some time ago. The comment in question was: “Can somebody eli5 [explain like I am 5 years old] this for me?” So here is my attempt (more like “eli12”, but anyways):

In order to explain the core idea of the article I need to convey the main takeaways of the following four things:

  1. Differential geometry,
  2. How it relates to symmetry,
  3. How it applies to experience, and
  4. How the effects of DMT turn out to be explained (in part) by changes in the curvature of one’s experience of space (what we call “phenomenal space”).

1) Differential Geometry

If you are an ant on a ball, it may seem like you live on a “flat surface”. However, let’s imagine you do the following: You advance one centimeter in one direction, you turn 90 degrees and walk another centimeter, turn 90 degrees again and advance yet another centimeter. Logically, you just “traced three edges of a square”, so you cannot be in the same place from which you departed. But let’s say that you somehow do happen to arrive at the same place. What happened? Well, it turns out the world in which you are walking is not quite flat! It’s very flat from your point of view, but overall it is a sphere! So you ARE able to walk along a triangle that happens to have three 90 degree corners.

That’s what we call a “positively curved space”. There the angles of triangles add up to more than 180 degrees. In flat spaces they add up to 180. And in “negatively curved spaces” (i.e. “negative Gaussian curvature” as talked about in the article) they add up to less than 180 degrees.


Eight 90-degree triangles on the surface of a sphere
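The ant’s triangle test can be made quantitative. For a geodesic triangle, the local Gauss-Bonnet theorem (a standard result, stated here for reference rather than drawn from the original article) says the angle sum departs from 180 degrees in proportion to the curvature enclosed:

\[
\alpha + \beta + \gamma \;=\; \pi + \int_T K \, dA
\]

On a unit sphere (K = +1) the triangle with three 90-degree corners has angle sum 3π/2, an excess of π/2 over π, and that excess is exactly the area of one of the eight triangles pictured above (4π/8 = π/2). With K < 0, the same integral produces a deficit, which is why hyperbolic triangles add up to less than 180 degrees.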

So let’s go back to the ant again. Now imagine that you are walking on some surface that, again, looks flat from your restricted point of view. You walk one centimeter, then turn 90 degrees, then walk another, turn 90 degrees, etc. for a total of, say, 5 times. And somehow you arrive at the same point! So now you traced a pentagon with 90 degree corners. How is that possible? The answer is that you are now in a “negatively curved space”, a kind of surface that in mathematics is called “hyperbolic”. Of course it sounds impossible that this could happen in real life. But the truth is that there are many hyperbolic surfaces that you can encounter in your daily life. Just to give an example, kale is a highly hyperbolic 2D surface (“H2” for short). It’s crumbly and very curved. So an ant might actually be able to walk along a regular pentagon with 90-degree corners if it’s walking on kale (cf. Too Many Triangles).


An ant walking on kale may infer that the world is an H2 space.

In brief, hyperbolic geometry is the study of spaces that have this quality of negative curvature. Now, how is this related to symmetry?

2) How it relates to symmetry

As mentioned, on the surface of a sphere you can find triangles with 90 degree corners. In fact, you can partition the surface of a sphere into 8 regular triangles, each with 90 degree corners. Now, there are also other ways of partitioning the surface of a sphere with regular shapes (“regular” in the sense that every edge has the same length, and every corner has the same angle). But the number of ways to do it is not infinite. After all, there’s only a handful of regular polyhedra (which, when “inflated”, are equivalent to the ways of partitioning the surface of a sphere in regular ways).


If you instead want to partition a plane in a regular way with geometric shapes, you don’t have many options. You can partition it using triangles, squares, and hexagons. And in all of those cases, the angles on each of the vertices will add up to 360 degrees (e.g. six triangles, four squares, or three hexagons meeting at a point). I won’t get into Wallpaper groups, but suffice it to say that there are also a limited number of ways of breaking down a flat surface using symmetry elements (such as reflections, rotations, etc.).


Regular tilings of 2D flat space

Hyperbolic 2D surfaces can be partitioned in regular ways in an infinite number of ways! This is because we no longer have the constraints imposed by flat (or spherical) geometries where the angles of shapes must add up to a certain number of degrees. As mentioned, in hyperbolic surfaces the corners of triangles add up to less than 180 degrees, so you can fit more than 6 corners of equilateral triangles at one point (and depending on the curvature of the space, you can fit up to an infinite number of them). Likewise, you can tessellate the entire hyperbolic plane with heptagons.


Hyperbolic tiling: Each of the heptagons is just as big (i.e. this is a projection of the real thing)

On the flip side, if you see a regular partitioning of a surface, you can infer what its curvature is! For example, if you see that a surface is entirely covered with heptagons, three on each of the corners, you can be sure that you are seeing a hyperbolic surface. And if you see a surface covered with triangles such that there are only four triangles on each joint, then you know you are seeing a spherical surface. So if you train yourself to notice and count these properties in regular patterns, you will indirectly also be able to determine whether the patterns inhabit a spherical, flat, or hyperbolic space!
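The inference rule in this paragraph - count which regular p-gons meet, q at a time, at each vertex, and read off the curvature - can be written down directly. A flat regular p-gon has corner angle (p − 2)/p · 180°, so q of them fit flat around a point exactly when (p − 2)(q − 2) = 4; smaller products force spherical geometry, larger ones hyperbolic. A minimal sketch:

```python
def classify_tiling(p: int, q: int) -> str:
    """Classify the regular tiling {p, q}: q regular p-gons meeting at
    each vertex. q flat p-gons close up around a point exactly when
    q * (p - 2) / p * 180 == 360, i.e. when (p - 2) * (q - 2) == 4;
    a smaller product leaves an angle surplus (sphere), a larger one
    an angle deficit to absorb (hyperbolic plane)."""
    s = (p - 2) * (q - 2)
    if s < 4:
        return "spherical"
    if s == 4:
        return "flat"
    return "hyperbolic"

# The examples from the text:
print(classify_tiling(7, 3))  # three heptagons per corner
print(classify_tiling(3, 4))  # four triangles per joint (the octahedron)
print(classify_tiling(6, 3))  # honeycomb
```

Running the three examples reproduces the classifications above: heptagons three-per-corner are hyperbolic, triangles four-per-joint are spherical, and the honeycomb is flat.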

3) How it applies to experience

How does this apply to experience? Well, in sober states of consciousness one is usually restricted to seeing and imagining spherical and flat surfaces (and their corresponding symmetric partitions). One can of course look at a piece of kale and think “wow, that’s a hyperbolic surface” but what is impossible to do is to see it “as if it were flat”. One can only see hyperbolic surfaces as projections (i.e. where we make regular shapes look irregular so that they can fit on a flat surface) or we end up contorting the surface in a crumbly fashion in order to fit it in our flat experiential space. (Note: even sober phenomenal space happens to be based on projective geometry; but let’s not go there for now.)

4) DMT: Hyperbolizing Phenomenal Space

In psychedelic states it is common to experience whatever one looks at (or, with more stunning effects, whatever one hallucinates in a sensorially-deprived environment such as a flotation tank) as slowly becoming more and more symmetric. Symmetrical patterns are attractors in psychedelia. It’s common for people to describe their acid experiences as “a kaleidoscope of colors and meaning”. We should not be too quick to dismiss these descriptions as purely metaphorical. As you can see from the article Algorithmic Reduction of Psychedelic States as well as PsychonautWiki’s Symmetrical Texture Repetition, LSD and other psychedelics do in fact “symmetrify” the textures you experience!


What gravel might look like on 150 mics of LSD (Source)

As it turns out, this symmetrification process (what we call “lowering the symmetry detection/propagation threshold”) does allow one to experience any of the possible ways of breaking down spherical and flat surfaces in regular ways (in addition to also enabling the experience of any wallpaper group!). Thus the surfaces of the objects one hallucinates on LSD (especially for Closed-Eye Visuals) are usually carpeted with patterns that have either spherical or flat symmetries (e.g. seeing honeycombs, square grids, regular triangulations, etc.; or seeing dodecahedra, cubes, etc.).


17 wallpaper symmetry groups

Only on very high doses of classic psychedelics does one start to experience objects that have hyperbolic curvature. And this is where DMT becomes very relevant. Vaping it is one of the most efficient ways of achieving “unworldly levels of psychedelia”:

On DMT the “symmetry detection threshold” is reduced to such an extent that any surface you look at very quickly gets super-saturated with regular patterns. Since (for reasons we don’t understand) our brain tries to incorporate whatever shape you hallucinate into the scene as part of the scene, the result of seeing too many triangles (or heptagons, or whatever) is that your brain will “push them into the surfaces” and, in effect, turn those surfaces into hyperbolic spaces.

Yet another part of your brain (or system of consciousness, whatever it turns out to be) recognizes that “wait, this is waaaay too curved somehow, let me try to shape it into something that could actually exist in my universe”. Hence, in practice, if you take between 10 and 20 mg of DMT, the hyperbolic surfaces you see will become bent and contorted (similar to the pictures you find in the article) just so that they can be “embedded” (a term that roughly means “to fit some object into a space without distorting its properties too much”) into your experience of the space around you.

But then there’s a critical point at which this is no longer possible: Even the most contorted embeddings of the hyperbolic surfaces you experience cannot fit any longer in your normal experience of space on doses above 20 mg, so your mind has no other choice but to change the curvature of the 3D space around you! Thus when you go from “very high on DMT” to “super high on DMT” it feels like you are traveling to an entirely new dimension, where the objects you experience do not fit any longer into the normal world of human experience. They exist in H3 (hyperbolic 3D space). And this is in part why it is so extremely difficult to convey the subjective quality of these experiences. One needs to invoke mathematical notions that are unfamiliar to most people; and even then, when they do understand the math, the raw feeling of changing the damn geometry of your experience is still a lot weirder than you could ever anticipate.


Anybody else want to play hyperbolic soccer? Humans vs. Entities, the match of the eon!

Note: The original article goes into more depth

Now that you understand the gist of the original article, I encourage you to take a closer look at it, as it includes content that I didn’t touch in this ELI5 (or 12) summary. It provides a granular description of the 6 levels of DMT experience (Threshold, Chrysanthemum, Magic Eye, Waiting Room, Breakthrough, and Amnesia), many pictures to illustrate the various levels as well as the particular emergent geometries, and a theoretical discussion of the various algorithmic reductions that might explain how the hyperbolization of phenomenal space takes place based on combining a series of simpler effects together.

Principia Qualia: Part II – Valence

Extract from Principia Qualia (2016) by my colleague Michael E. Johnson (from Qualia Research Institute). This is intended to summarize the core ideas of chapter 2, which proposes a precise, testable, simple, and so far science-compatible theory of the fundamental nature of valence (also called hedonic tone or the pleasure-pain axis; what makes experiences feel good or bad).


VII. Three principles for a mathematical derivation of valence

We’ve covered a lot of ground with the above literature reviews, and with synthesizing a new framework for understanding consciousness research. But we haven’t yet fulfilled the promise about valence made in Section II- to offer a rigorous, crisp, and relatively simple hypothesis about valence. This is the goal of Part II.

Drawing from the framework in Section VI, I offer three principles to frame this problem: ​

1. Qualia Formalism: for any given conscious experience, there exists- in principle- a mathematical object isomorphic to its phenomenology. This is a formal way of saying that consciousness is in principle quantifiable- much as electromagnetism, or the square root of nine is quantifiable. I.e. IIT’s goal, to generate such a mathematical object, is a valid one.

2. Qualia Structuralism: this mathematical object has a rich set of formal structures. Based on the regularities & invariances in phenomenology, it seems safe to say that qualia has a non-trivial amount of structure. It likely exhibits connectedness (i.e., it’s a unified whole, not the union of multiple disjoint sets), and compactness, and so we can speak of qualia as having a topology.

More speculatively, based on the following:

(a) IIT’s output format is data in a vector space,

(b) Modern physics models reality as a wave function within Hilbert Space, which has substantial structure,

(c) Components of phenomenology such as color behave as vectors (Feynman 1965), and

(d) Spatial awareness is explicitly geometric,

…I propose that Qualia space also likely satisfies the requirements of being a metric space, and we can speak of qualia as having a geometry.
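For reference, the standard definitions being invoked (these are textbook requirements, not claims from the extract): saying Qualia space is a metric space means there is a distance function d on it satisfying, for all experiences x, y, z,

\[
d(x,y) \ge 0,\qquad d(x,y)=0 \iff x=y,\qquad d(x,y)=d(y,x),\qquad d(x,z) \le d(x,y)+d(y,z).
\]

A topology only tells us which experiences count as “nearby”; a metric additionally says how far apart they are, which is the extra structure a quantitative treatment of valence would need.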

Mathematical structures are important, since the more formal structures a mathematical object has, the more elegantly we can speak about patterns within it, and the closer our words can get to “carving reality at the joints”. ​

3. Valence Realism: valence is a crisp phenomenon of conscious states upon which we can apply a measure.

–> I.e. some experiences do feel holistically better than others, and (in principle) we can associate a value to this. Furthermore, to combine (2) and (3), this pleasantness could be encoded into the mathematical object isomorphic to the experience in an efficient way (we should look for a concise equation, not an infinitely-large lookup table for valence). […]


I believe my three principles are all necessary for a satisfying solution to valence (and the first two are necessary for any satisfying solution to consciousness):

Considering the inverses:

If Qualia Formalism is false, then consciousness is not quantifiable, and there exists no formal knowledge about consciousness to discover. But if the history of science is any guide, we don’t live in a universe where phenomena are intrinsically unquantifiable- rather, we just haven’t been able to crisply quantify consciousness yet.

If Qualia Structuralism is false and Qualia space has no meaningful structure to discover and generalize from, then most sorts of knowledge about qualia (such as which experiences feel better than others) will likely be forever beyond our empirical grasp. I.e., if Qualia space lacks structure, there will exist no elegant heuristics or principles for interpreting what a mathematical object isomorphic to a conscious experience means. But this doesn’t seem to match the story from affective neuroscience, nor from our everyday experience: we have plenty of evidence for patterns, regularities, and invariances in phenomenological experiences. Moreover, our informal, intuitive models for predicting our future qualia are generally very good. This implies our brains have figured out some simple rules-of-thumb for how qualia is structured, and so qualia does have substantial mathematical structure, even if our formal models lag behind.

If Valence Realism is false, then we really can’t say very much about ethics, normativity, or valence with any confidence, ever. But this seems to violate the revealed preferences of the vast majority of people: we sure behave as if some experiences are objectively superior to others, at arbitrarily-fine levels of distinction. It may be very difficult to put an objective valence on a given experience, but in practice we don’t behave as if this valence doesn’t exist.


VIII. Distinctions in qualia: charting the explanation space for valence

Sections II-III made the claim that we need a bottom-up quantitative theory like IIT in order to successfully reverse-engineer valence, Section VI suggested some core problems & issues theories like IIT will need to address, and Section VII proposed three principles for interpreting IIT-style output:

  1. We should think of qualia as having a mathematical representation,
  2. This mathematical representation has a topology and probably a geometry, and perhaps more structure, and
  3. Valence is real; some things do feel better than others, and we should try to explain why in terms of qualia’s mathematical representation.

But what does this get us? Specifically, how does assuming these three things get us any closer to solving valence if we don’t have an actual, validated dataset (“data structure isomorphic to the phenomenology”) from *any* system, much less a real brain?

It actually helps a surprising amount, since an isomorphism between a structured (e.g., topological, geometric) space and qualia implies that any clean or useful distinction we can make in one realm automatically applies in the other realm as well. And if we can explore what kinds of distinctions in qualia we can make, we can start to chart the explanation space for valence (what ‘kind’ of answer it will be).

I propose the following four distinctions, which depend on only a very small amount of mathematical structure inherent in qualia space and should apply equally to qualia and to qualia’s mathematical representation:

  1. Global vs local
  2. Simple vs complex
  3. Atomic vs composite
  4. Intuitively important vs intuitively trivial


Takeaways: this section has suggested that we can get surprising mileage out of the hypothesis that there will exist a geometric data structure isomorphic to the phenomenology of a system, since if we can make a distinction in one domain (math or qualia), it will carry over into the other domain ‘for free’. Given this, I put forth the hypothesis that valence may plausibly be a simple, global, atomic, and intuitively important property of both qualia and its mathematical representation.

IX. Summary of heuristics for reverse-engineering the pattern for valence

Reverse-engineering the precise mathematical property that corresponds to valence may seem like finding a needle in a haystack, but I propose that it may be easier than it appears. Broadly speaking, I see six heuristics for zeroing in on valence:

A. Structural distinctions in Qualia space (Section VIII);

B. Empirical hints from affective neuroscience (Section I);

C. A priori hints from phenomenology;

D. Empirical hints from neurocomputational syntax;

E. The Non-adaptedness Principle;

F. Common patterns across physical formalisms (lessons from physics).

None of these heuristics determines the answer, but in aggregate they dramatically reduce the search space.

IX.A: Structural distinctions in Qualia space (Section VIII):

In the previous section, we noted that the following distinctions about qualia can be made: Global vs local; Simple vs complex; Atomic vs composite; Intuitively important vs intuitively trivial. Valence plausibly corresponds to a global, simple, atomic, and intuitively important mathematical property.


Music is surprisingly pleasurable; auditory dissonance is surprisingly unpleasant. Clearly, music has many adaptive signaling & social bonding aspects (Storr 1992; McDermott and Hauser 2005)- yet if we subtract everything that could be considered signaling or social bonding (e.g., lyrics, performative aspects, social bonding & enjoyment), we’re still left with something very emotionally powerful. However, this pleasantness can vanish abruptly, and even reverse, if dissonance is added.

Much more could be said here, but a few of the more interesting data points are:

  1. Pleasurable music tends to involve elegant structure when represented geometrically (Tymoczko 2006);
  2. Non-human animals don’t seem to find human music pleasant (with some exceptions), but with knowledge of the pitch range and tempo their auditory systems are optimized to pay attention to, we’ve been able to adapt human music so that animals prefer it over silence (Snowdon and Teie 2010);
  3. Consonance appears to be a primary factor in which sounds 2- and 4-month-old infants find pleasant vs unpleasant (Trainor, Tsang, and Cheung 2002);
  4. Hearing two of our favorite songs at once doesn’t feel better than just one; instead, it feels significantly worse.

More generally, music seems a particularly interesting case study for picking apart the information-theoretic aspects of valence, and it seems plausible that evolution piggybacked on some fundamental law of qualia to produce the human preference for music. This effect should be most obscured in genres of music which focus on lyrics, social proof & social cohesion (e.g., pop music), and performative aspects, and clearest in genres of music which avoid these things (e.g., certain genres of classical music).


X. A simple hypothesis about valence

To recap, the general heuristic from Section VIII was that valence may plausibly correspond to a simple, atomic, global, and intuitively important geometric property of a data structure isomorphic to phenomenology. The specific heuristics from Section IX surveyed hints from a priori phenomenology, hints from what we know of the brain’s computational syntax, introduced the Non-adaptedness Principle, and noted the unreasonable effectiveness of beautiful mathematics in physics to suggest that the specific geometric property corresponding to pleasure should be something that involves some sort of mathematically-interesting patterning, regularity, efficiency, elegance, and/or harmony.

We don’t have enough information to formally deduce which mathematical property these constraints indicate, yet in aggregate these constraints hugely reduce the search space, and also substantially point toward the following:

Given a mathematical object isomorphic to the qualia of a system, the mathematical property which corresponds to how pleasant it is to be that system is that object’s symmetry.


XI. Testing this hypothesis today

In a perfect world, we could plug many people’s real-world IIT-style datasets into a symmetry-detection algorithm and see if this “Symmetry in the Topology of Phenomenology” (SiToP) theory of valence successfully predicted their self-reported valences.

Unfortunately, we’re a long way from having the theory and data to do that.

But if we make two fairly modest assumptions, I think we should be able to perform some reasonable, simple, and elegant tests on this hypothesis now. The two assumptions are:

  1. We can probably assume that symmetry/pleasure is a more-or-less fractal property: i.e., it’ll be evident on basically all locations and scales of our data structure, and so it should be obvious even with imperfect measurements. Likewise, symmetry in one part of the brain will imply symmetry elsewhere, so we may only need to measure it in a small section that need not be directly contributing to consciousness.
  2. We can probably assume that symmetry in connectome-level brain networks/activity will roughly imply symmetry in the mathematical-object-isomorphic-to-phenomenology (the symmetry that ‘matters’ for valence), and vice-versa. I.e., we need not worry too much about the exact ‘flavor’ of symmetry we’re measuring.

So- given these assumptions, I see three ways to test our hypothesis:

1. More pleasurable brain states should be more compressible (all else being equal).

Symmetry implies compressibility, and so if we can measure the compressibility of a brain state in some sort of broad-stroke fashion while controlling for degree of consciousness, this should be a fairly good proxy for how pleasant that brain state is.
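As a toy illustration of the compressibility proxy (this is not a brain measurement: the signals, the coarse quantization scheme, and the `compressibility` helper are all illustrative assumptions), a highly patterned "harmonic" signal compresses far better than matched noise:

```python
import zlib

import numpy as np

def compressibility(signal: np.ndarray, levels: int = 256) -> float:
    """Compressed size / raw size after coarse quantization.
    Lower values = more compressible = (on this hypothesis) more pleasant."""
    shifted = signal - signal.min()
    quantized = np.round(shifted / shifted.max() * (levels - 1)).astype(np.uint8)
    raw = quantized.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

t = np.linspace(0, 1, 10_000)
# A highly "symmetric" state: a stack of harmonically related oscillations.
harmonic = sum(np.sin(2 * np.pi * f * t) for f in (1, 2, 4, 8, 16))
# A disordered state with comparable variance: white noise.
noise = np.random.default_rng(0).normal(0.0, harmonic.std(), size=t.shape)

# The structured signal should be markedly more compressible than the noise.
assert compressibility(harmonic) < compressibility(noise)
```

Real brain-state data would of course need a far more careful pipeline (controlling for degree of consciousness, as noted above), but the proxy itself is this cheap to compute.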


2. Highly consonant/harmonious/symmetric patterns injected directly into the brain should feel dramatically better than similar but dissonant patterns.

Consonance in audio signals generally produces positive valence; dissonance (e.g., nails-on-a-chalkboard) reliably produces negative valence. This obviously follows from our hypothesis, but it’s also obviously true, so we can’t use it as a novel prediction. But if we take the general idea and apply it to unusual ways of ‘injecting’ a signal into the brain, we should be able to make predictions that are (1) novel, and (2) practically useful.

Transcranial magnetic stimulation (TMS) is generally used to disrupt brain functions by oscillating a strong magnetic field over a specific region to make those neurons fire chaotically. But if we used it on a lower-powered, rhythmic setting to ‘inject’ a symmetric/consonant pattern directly into parts of the brain involved directly with consciousness, the result should produce a good feeling- or at least, much better valence than a similar dissonant pattern.

Our specific prediction: direct, low-power, rhythmic stimulation (via TMS) of the thalamus at harmonic frequencies (e.g., 1+2+4+6+8+12+16+24+36+48+72+96+148 Hz) should feel significantly more pleasant than similar stimulation at dissonant frequencies (e.g., 1.01+2.01+3.98+6.02+7.99+12.03+16.01+24.02+35.97+48.05+72.04+95.94+147.93 Hz).
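The consonant-vs-dissonant contrast behind this prediction can itself be quantified. A minimal sketch, using Sethares’ fit to the Plomp–Levelt sensory-roughness curve (applied here to illustrative audio-range partials rather than the TMS frequencies, since the model was fit to audible tones):

```python
import numpy as np

def roughness(freqs, amps=None):
    """Summed pairwise sensory roughness (Sethares' parameterization of the
    Plomp-Levelt dissonance curve); higher = more dissonant."""
    freqs = np.asarray(freqs, dtype=float)
    amps = np.ones_like(freqs) if amps is None else np.asarray(amps, dtype=float)
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            f_low, f_high = sorted((freqs[i], freqs[j]))
            s = 0.24 / (0.021 * f_low + 19)  # critical-bandwidth scaling
            x = s * (f_high - f_low)
            total += amps[i] * amps[j] * (np.exp(-3.5 * x) - np.exp(-5.75 * x))
    return total

perfect_fifth = [440.0, 660.0]  # 3:2 ratio, consonant
minor_second  = [440.0, 466.2]  # ~16:15 ratio, dissonant

# The dissonant interval scores far rougher than the consonant one.
assert roughness(minor_second) > roughness(perfect_fifth)
```

A real version of the experiment would need a roughness-style measure validated for the stimulation modality in question, but this shows the kind of objective consonance score the prediction presupposes.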


3. More consonant vagus nerve stimulation (VNS) should feel better than dissonant VNS.

The above harmonics-based TMS method would be a ‘pure’ test of the ‘Symmetry in the Topology of Phenomenology’ (SiToP) hypothesis. It may rely on developing custom hardware and is also well outside of my research budget.

However, a promising alternative method to test this is with consumer-grade vagus nerve stimulation (VNS) technology. Nervana Systems has an in-ear device which stimulates the Vagus nerve with rhythmic electrical pulses as it winds its way past the left ear canal. The stimulation is synchronized with either user-supplied music or ambient sound. This synchronization is done, according to the company, in order to mask any discomfort associated with the electrical stimulation. The company says their system works by “electronically signal[ing] the Vagus nerve which in turn stimulates the release of neurotransmitters in the brain that enhance mood.”

This explanation isn’t very satisfying, since it merely punts on the question of why these neurotransmitters enhance mood- but their approach seems to work, and based on the symmetry/harmony hypothesis we can say at least something about why: effectively, they’ve somewhat accidentally built a synchronized bimodal approach (a coordinated combination of music+VNS) for inducing harmony/symmetry in the brain. This is certainly not the only component of how this VNS system functions, since the parasympathetic nervous system is both complex and powerful by itself, but it could be an important component.

Based on our assumptions about what valence is, we can make a hierarchy of predictions:

  1. Harmonious music + synchronized VNS should feel the best;
  2. Harmonious music + placebo VNS (unsynchronized, simple pattern of stimulation) should feel less pleasant than (1);
  3. Harmonious music + non-synchronized VNS (stimulation that is synchronized to a different kind of music) should feel less pleasant than (1);
  4. Harmonious music + dissonant VNS (stimulation with a pattern which scores low on consonance measures such as Chon (2008)) should feel worse than (2) and (3);
  5. Dissonant auditory noise + non-synchronized, dissonant VNS should feel pretty awful.

We can also predict that if a bimodal approach for inducing harmony/symmetry in the brain is better than a single modality, a trimodal or quadrimodal approach may be even more effective. E.g., we should consider testing the addition of synchronized rhythmic tactile stimulation and symmetry-centric music visualizations. A key question here is whether adding stimulation modalities would lead to diminishing or synergistic/accelerating returns.

Attending the 2017 Psychedelic Science Conference

From the 19th to the 24th of April I will be hanging out at Psychedelic Science 2017 (if you are interested in attending but have not bought the tickets: remember you can register until the 14th of February).

In case you enjoy Qualia Computing and you are planning on going, you can now meet the human who is mostly responsible for these articles. I am looking forward to meeting a lot of awesome researchers. If you see me and enjoy what I do, don’t be afraid to say hi.

Why Care About Psychedelics?

Although the study of psychedelics and their effects is not a terminal value here at Qualia Computing, it is instrumental in achieving the main goals. The core philosophy of Qualia Computing is to (1) map out the state-space of possible experiences, (2) identify the computational properties of consciousness, and (3) reverse-engineer valence so as to find the way to stay positive without going insane.

Psychedelic experiences happen to be very informative and useful in making progress towards these three goals. The quality and magnitude of the consciousness alteration that they induce lends itself to exploring these questions. First, the state-space of humanly accessible experiences is greatly amplified once you add psychedelics into the mix. Second, the nature of these experiences is anything but computationally dull (cf. alcohol and opioids). On the contrary, psychedelic experiences involve non-ordinary forms of qualia computing: the textures of consciousness interact in non-trivial ways, and it stands to reason that some combinations of these textures will be recruited in the future for more than aesthetic purposes. They will be used for computational purposes, too. And third, psychedelic states greatly amplify the range of valence (i.e. the maximum intensity of both pain and pleasure). They unlock the possibility of experiencing peak bliss as well as intense suffering. This strongly suggests that whatever underpins valence at the fundamental level, psychedelics are able to amplify it to a fantastic (and terrifying) extent. Thus, serious valence research will undoubtedly benefit from psychedelic science.

It is for this reason that psychedelics have been a major topic explored here since the beginning of this project. Here is a list of articles that directly deal with the subject:

List of Qualia Computing Psychedelic Articles

1) Psychophysics For Psychedelic Research: Textures

How do you make a psychophysical experiment that tells you something foundational about the information-processing properties of psychedelic perception? I proposed to use an experimental approach invented by Benjamin J. Balas based on the anatomically-inspired texture analysis and synthesis techniques developed by Eero Simoncelli. In brief, one seeks to determine which summary statistics are sufficient to create perceptual (textural) metamers. In turn, in the context of psychedelic research, this can help us determine which statistical properties are best discriminated while sober and which differences are amplified while on psychedelics.
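To make the idea of “summary statistics” concrete, here is a drastically simplified stand-in for the Balas/Simoncelli approach (marginal moments plus neighbor correlations on a grayscale patch; the real Portilla–Simoncelli model uses steerable-pyramid statistics, and all names below are illustrative):

```python
import numpy as np

def summary_statistics(patch: np.ndarray) -> dict:
    """Toy texture statistics: marginal moments plus correlations
    with horizontally and vertically shifted copies of the patch."""
    p = patch - patch.mean()
    return {
        "mean": patch.mean(),
        "var": patch.var(),
        "skew": (p**3).mean() / patch.std() ** 3,
        "h_corr": np.corrcoef(p[:, :-1].ravel(), p[:, 1:].ravel())[0, 1],
        "v_corr": np.corrcoef(p[:-1, :].ravel(), p[1:, :].ravel())[0, 1],
    }

rng = np.random.default_rng(1)
noise = rng.normal(size=(64, 64))
# Local averaging introduces spatial correlation, which the statistics detect.
smooth = (noise + np.roll(noise, 1, axis=1) + np.roll(noise, 1, axis=0)) / 3

assert summary_statistics(smooth)["h_corr"] > summary_statistics(noise)["h_corr"]
```

Two textures that match on a sufficient statistic set should be perceptual metamers when sober; the psychedelic question is which of these statistics stop being “sufficient” while high.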

2) State-Space of Drug Effects

I distributed a survey in which I asked people to rate drug experiences along 60 different dimensions. I then conducted factor analysis on these responses. This way I empirically derived six major latent traits that account for more than half of the variance across all drug experiences. Three of these factors are tightly related to valence, which suggests that hedonic recalibration might benefit from a multi-faceted approach.
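The shape of that analysis can be sketched on synthetic data (this is not the survey dataset; respondent counts, loadings, and noise level are invented, and PCA via SVD stands in for the rotated factor analysis actually used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey: 500 respondents x 60 rating dimensions,
# secretly generated from 6 latent traits plus noise.
n_respondents, n_items, n_factors = 500, 60, 6
loadings = rng.normal(size=(n_factors, n_items))
traits = rng.normal(size=(n_respondents, n_factors))
responses = traits @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

# PCA via SVD: a common first pass before rotated factor analysis.
centered = responses - responses.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# With 6 real latent traits, the top 6 components dominate the variance.
assert explained[:6].sum() > 0.5
```

On real survey data the interesting step is interpreting what the recovered factors mean; the code above only shows how “six traits explain more than half the variance” is read off the spectrum.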

3) How to Secretly Communicate with People on LSD

I suggest that control interruption (i.e. the failure of feedback inhibition during psychedelic states) can be employed to transmit information in a secure way to people who are in other states of consciousness. A possible application of this technology might be: You and your friends at Burning Man want to send a secret message to every psychedelic user on a particular camp in such a way that no infiltrated cop is able to decode it. To do so you could instantiate the techniques outlined in this article on a large LED display.

4) The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

This article discusses the phenomenology of DMT states from the point of view of differential geometry. In particular, an argument is provided in favor of the view that high grade psychedelia usually involves a sort of geometric hyperbolization of phenomenal space.

5) LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

We provide an empirical method to test the (extremely) wild hypothesis that it is possible to experience “multiple branches of the multiverse at once” on high doses of psychedelics. The point is not to promote a particular interpretation of such experiences. Rather, the point is that we can actually generate predictions from such interpretations and then go ahead and test them.

6) Algorithmic Reduction of Psychedelic States

People report a zoo of psychedelic effects. However, as in most things in life, there may be a relatively small number of basic effects that, when combined, can account for the wide variety of phenomena we actually observe. Algorithmic reductions are proposed as a conceptual framework for analyzing psychedelic experiences. Four candidate main effects are proposed.

7) Peaceful Qualia: The Manhattan Project of Consciousness

Imagine that there was a world-wide effort to identify the varieties of qualia that promote joy and prosocial behavior at the same time. Could these be used to guarantee world peace? By giving people free access to the most valuable and prosocial states of consciousness, one may end up averting large-scale conflict in a sustainable way. This article explores how this investigation might be carried out and proposes organizational principles for such a large-scale research project.

8) Getting closer to digital LSD

Why are the Google Deep Dream pictures so trippy? This is not just a coincidence. People call them trippy for a reason.

9) Generalized Wada-Test

In a Wada-test a surgeon puts half of your brain to sleep and evaluates the cognitive skills of your awake half. Then the process is repeated in mirror image. Can we generalize this procedure? Imagine that instead of just putting a hemisphere to sleep we gave it psychedelics. What would it feel like to be tripping, but only with your right hemisphere? Even more generally: envision a scheme in which one alternates a large number of paired states of consciousness and study their mixtures empirically. This way it may be possible to construct a network of “opinions that states of consciousness have about each other”. Could this help us figure out whether there is a universal scale for subjective value (i.e. valence)?

10) Psychedelic Perception of Visual Textures

In this article I discuss some problems with verbal accounts of psychedelic visuals, and I invite readers to look at some textures (provided in the article) and try to describe them while high on LSD, 2C-B, DMT, etc. You can read some of the hilarious comments already left in there.

11) The Super-Shulgin Academy: A Singularity I Can Believe In

Hard to summarize.


The Binding Problem

[Our] subjective conscious experience exhibits a unitary and integrated nature that seems fundamentally at odds with the fragmented architecture identified neurophysiologically, an issue which has come to be known as the binding problem. For the objects of perception appear to us not as an assembly of independent features, as might be suggested by a feature based representation, but as an integrated whole, with every component feature appearing in experience in the proper spatial relation to every other feature. This binding occurs across the visual modalities of color, motion, form, and stereoscopic depth, and a similar integration also occurs across the perceptual modalities of vision, hearing, and touch. The question is what kind of neurophysiological explanation could possibly offer a satisfactory account of the phenomenon of binding in perception?
One solution is to propose explicit binding connections, i.e. neurons connected across visual or sensory modalities, whose state of activation encodes the fact that the areas that they connect are currently bound in subjective experience. However this solution merely compounds the problem, for it represents two distinct entities as bound together by adding a third distinct entity. It is a declarative solution, i.e. the binding between elements is supposedly achieved by attaching a label to them that declares that those elements are now bound, instead of actually binding them in some meaningful way.
Von der Malsburg proposes that perceptual binding between cortical neurons is signalled by way of synchronous spiking, the temporal correlation hypothesis (von der Malsburg & Schneider 1986). This concept has found considerable neurophysiological support (Eckhorn et al. 1988, Engel et al. 1990, 1991a, 1991b, Gray et al. 1989, 1990, 1992, Gray & Singer 1989, Stryker 1989). However, although these findings are suggestive of some significant computational function in the brain, the temporal correlation hypothesis as proposed is little different from the binding label solution, the only difference being that the label is defined by a new channel of communication, i.e. by way of synchrony. In information theoretic terms, this is no different than saying that connected neurons possess two separate channels of communication, one to transmit feature detection, and the other to transmit binding information. The fact that one of these channels uses a synchrony code instead of a rate code sheds no light on the essence of the binding problem. Furthermore, as Shadlen & Movshon (1999) observe, the temporal binding hypothesis is not a theory about how binding is computed, but only how binding is signaled, a solution that leaves the most difficult aspect of the problem unresolved.
I propose that the only meaningful solution to the binding problem must involve a real binding, as implied by the metaphorical name. A glue that is supposed to bind two objects together would be most unsatisfactory if it merely labeled the objects as bound. The significant function of glue is to ensure that a force applied to one of the bound objects will automatically act on the other one also, to ensure that the bound objects move together through the world even when one, or both of them are being acted on by forces. In the context of visual perception, this suggests that the perceptual information represented in cortical maps must be coupled to each other with bi-directional functional connections in such a way that perceptual relations detected in one map due to one visual modality will have an immediate effect on the other maps that encode other visual modalities. The one-directional axonal transmission inherent in the concept of the neuron doctrine appears inconsistent with the immediate bi-directional relation required for perceptual binding. Even the feedback pathways between cortical areas are problematic for this function due to the time delay inherent in the concept of spike train integration across the chemical synapse, which would seem to limit the reciprocal coupling between cortical areas to those within a small number of synaptic connections. The time delays across the chemical synapse would seem to preclude the kind of integration apparent in the binding of perception and consciousness across all sensory modalities, which suggests that the entire cortex is functionally coupled to act as a single integrated unit.
— Section 5 of “Harmonic Resonance Theory: An Alternative to the ‘Neuron Doctrine’ Paradigm of Neurocomputation to Address Gestalt properties of perception” by Steven Lehar

Beyond Turing: A Solution to the Problem of Other Minds Using Mindmelding and Phenomenal Puzzles

Here is my attempt at providing an experimental protocol to determine whether an entity is conscious.

If you are just looking for the stuffed animal music video, skip to 23:28.

Are you the only conscious being in existence? How could we actually test whether other beings have conscious minds?

Turing proposed to test the existence of other minds by measuring their verbal indistinguishability from humans (the famous “Turing Test” asks computers to pretend to be humans and checks if humans buy the impersonations). Others have suggested the solution is as easy as connecting your brain to the brain of the being you want to test.

But these approaches fail for a variety of reasons. Turing tests can be beaten by dream characters and mindmelds might merely work by giving you a “hardware upgrade”. There is no guarantee that the entity tested will be conscious on its own. As pointed out by Brian Tomasik and Eliezer Yudkowsky, even if the information content of your experience increases significantly by mindmelding with another entity, this could still be the result of the entity’s brain working as an exocortex: it is completely unconscious on its own yet capable of enhancing your consciousness.

In order to go beyond these limiting factors, I developed the concept of a “phenomenal puzzle”. These are problems that can only be solved by a conscious being in virtue of requiring inner qualia operations for their solution. For example, a phenomenal puzzle is to arrange qualia values of phenomenal color in a linear map where the metric is based on subjective Just Noticeable Differences.
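The color-arrangement puzzle has a clean computational core: recovering a linear arrangement from pairwise distances is classical one-dimensional multidimensional scaling (MDS). A sketch on hypothetical data (the “true positions” and JND counts below are invented for illustration; a real run would use measured just-noticeable-difference counts between color samples):

```python
import numpy as np

def embed_1d(distances: np.ndarray) -> np.ndarray:
    """Classical MDS onto one dimension: recover a linear arrangement
    from a symmetric matrix of pairwise (JND-count) distances."""
    n = distances.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    gram = -0.5 * centering @ (distances**2) @ centering  # double-centering
    eigvals, eigvecs = np.linalg.eigh(gram)               # ascending order
    return eigvecs[:, -1] * np.sqrt(eigvals[-1])          # top coordinate

# Hypothetical "qualia values": five hues at unknown 1-D positions,
# observed only through pairwise JND counts.
true_positions = np.array([0.0, 1.0, 3.0, 6.0, 10.0])
jnd_counts = np.abs(true_positions[:, None] - true_positions[None, :])

recovered = embed_1d(jnd_counts)
order = list(np.argsort(recovered))
# The recovered line reproduces the true ordering (possibly reversed).
assert order == [0, 1, 2, 3, 4] or order == [4, 3, 2, 1, 0]
```

The hard part of the puzzle is of course not the arithmetic but performing the pairwise JND comparisons with one’s own qualia; that inner step is exactly what the test is designed to force.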

To conduct the experiment you need:

  1. A phenomenal bridge (e.g. a biological neural network that connects your brain to someone else’s brain so that both brains now instantiate a single consciousness).
  2. A qualia calibrator (a device that allows you to cycle through many combinations of qualia values quickly so that you can compare the sensory-qualia mappings in both brains and generate a shared vocabulary for qualia values).
  3. A phenomenal puzzle (as described above).
  4. The right set and setting: the use of a proper protocol.

Here is an example protocol that works for 4) – though there may be other ones that work as well. Assume that you are person A and you are trying to test if B is conscious:

A) Person A learns about the phenomenal puzzle but is not given enough time to solve it.
B) Person A and B mindmeld using the phenomenal bridge, creating a new being AB.
C) AB tells the phenomenal puzzle to itself (by remembering it from A’s narrative).
D) A and B get disconnected and A is sedated (to prevent A from solving the puzzle).
E) B tries to solve the puzzle on its own (the use of computers not connected to the internet is allowed to facilitate self-experimentation).
F) When B claims to have solved it A and B reconnect into AB.
G) AB then tells the solution to itself so that the records of it in B’s narrative get shared with A’s brain memory.
H) Then A and B get disconnected again and if A is able to provide the answer to the phenomenal puzzle, then B must have been conscious!
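The steps above can be sketched as pure information-flow bookkeeping: track which facts each brain holds, where mindmelding pools memories. This toy model (all names illustrative) checks only the logic of the protocol, not anything about consciousness itself:

```python
def run_protocol(b_can_solve: bool) -> bool:
    """Toy trace of steps A-H: returns True iff A ends up holding the
    solution, which requires B to have solved the puzzle alone."""
    a_memory, b_memory = {"puzzle"}, set()      # step A: only A knows the puzzle

    merged = a_memory | b_memory                # step B: mindmeld into AB
    a_memory = b_memory = set(merged)           # step C: AB shares the puzzle

    # steps D-E: disconnected; A is sedated while B attempts the puzzle alone
    if b_can_solve and "puzzle" in b_memory:
        b_memory.add("solution")

    merged = a_memory | b_memory                # steps F-G: reconnect, share records
    a_memory = set(merged)

    return "solution" in a_memory               # step H: does A hold the solution?

# A learns the solution if and only if B solved it while disconnected.
assert run_protocol(b_can_solve=True) is True
assert run_protocol(b_can_solve=False) is False
```

The bookkeeping makes the no-false-positives claim vivid: the only path by which the solution can enter A’s memory runs through B’s solo solving step.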

To my knowledge, this is the only test of consciousness for which a positive result is impossible (or maybe just extremely difficult?) to explain unless B is conscious.

Of course B could be conscious but not smart enough to solve the phenomenal puzzle. The test simply guarantees that there will be no false positives. Thus it is not a general test for qualia – but it is a start. At least we can now conceive of a way to know (in principle) whether some entities are conscious (even if we can’t tell that any arbitrary entity is). Still, a positive result would completely negate solipsism, which would undoubtedly be a great philosophical victory.

Core Philosophy

David Pearce asked me ages ago to make accessible videos about transhumanism, consciousness, and the abolitionist project. Well, here is a start.

In this video I outline the core philosophy and objectives of Qualia Computing. There are three main goals here:


  1. Catalogue the entire state-space of consciousness
  2. Identify the computational properties of each experience (and its qualia components), and
  3. Reverse engineer valence (i.e. to discover the function that maps formal descriptions of states of consciousness to values in the pleasure-pain axis)


While describing the 1st objective I explain that we start by realizing that consciousness is doing something useful (or evolution would not have been able to recruit it for information-processing purposes). I also go on to explain the difference between qualia varieties (e.g. phenomenal color, smell, touch, thought, etc.) and qualia values (i.e. the specific points in the state-spaces defined by the varieties, such as “pure phenomenal blue” or the smell of cardamom).


With regards to the 2nd objective, I explain that our minds actually use the specific properties of each qualia variety in order to represent information states and then to solve computational problems. We are only getting started in this project.


And 3rd, I argue that discovering exactly what makes an experience “worth living”, in a formal and mathematical way, is indeed ethically urgent. With a fundamental understanding of valence we can develop precise interventions to reduce (or even prevent altogether) any form of suffering without messing with our capacity to think and explore the state-space of consciousness (at least the valuable part of it).


I conclude by pointing out that the 1st and 2nd research programs actually interact in non-trivial ways: There is a synergy between them which may lead us to a recursively self-improving intelligence (and do so in a far “safer” way than trying to build an AGI through digital software).

David Pearce on the “Schrodinger’s Neurons Conjecture”

My friend +Andrés Gómez Emilsson on Qualia Computing: LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?


Most truly radical intellectual progress depends on “crazy” conjectures. Unfortunately, few folk who make crazy conjectures give serious thought to extracting novel, precise, experimentally falsifiable predictions to confound their critics. Even fewer then publish the almost inevitable negative experimental result when their crazy conjecture isn’t confirmed. So kudos to Andrés for doing both!!


What would the world look like if the superposition principle never breaks down, i.e. the unitary Schrödinger dynamics holds on all scales, and not just the microworld? The naïve – and IMO mistaken – answer is that without the “collapse of the wavefunction”, we’d see macroscopic superpositions of live-and-dead cats, experiments would never appear to have determinate outcomes, and the extremely well tested Born rule (i.e. the probability of a result is the squared absolute value of the inner product) would be violated. Or alternatively, assuming DeWitt’s misreading of Everett, if the superposition principle never breaks down, then when you observe a classical live cat, or a classical dead cat, your decohered (“split”) counterpart in a separate classical branch of the multiverse sees a dead cat or a live cat, respectively.


In my view, all these stories rest on a false background assumption. Talk of “observers” and “observations” relies on a naïve realist conception of perception whereby you (the “observer”) somehow hop outside of your transcendental skull to inspect the local mind-independent environment (“make an observation”). Such implicit perceptual direct realism simply assumes – rather than derives from quantum field theory – the existence of unified observers (“global” phenomenal binding) and phenomenally-bound classical cats and individually detected electrons striking a mind-independent classical screen cumulatively forming a non-classical interference pattern (“local” phenomenal binding). Perception as so conceived – as your capacity for some sort of out-of-body feat of levitation – isn’t physically possible. The role of the mind-independent environment beyond one’s transcendental skull is to select states of mind internal to your world-simulation; the environment can’t create, or imprint its signature on, your states of mind (“observations”) – any more than the environment can create or imprint its signature on your states of mind while you’re dreaming.


Here’s an alternative conjecture – a conjecture that holds regardless of whether you’re drug-naïve, stone-cold sober, having an out-of-body experience on ketamine, awake or dreaming, or tripping your head off on LSD. You’re experiencing “Schrödinger’s cat” states right now in virtue of instantiating a classical world-simulation. Don’t ask what it’s like to perceive a live-and-dead Schrödinger’s cat; ask instead what it’s like to instantiate a coherent superposition of distributed feature-processing neurons. Only the superposition principle allows you to experience phenomenally-bound classical objects that one naively interprets as lying in the mind-independent world. In my view, the universal validity of the superposition principle allows you to experience a phenomenally bound classical cat within a seemingly classical world-simulation – or perform experiments with classical-looking apparatus that have definite outcomes, and confirm the Born rule. Only the vehicle of individual coherent superpositions of distributed neuronal feature-processors allows organic mind-brains to run world-simulations described by an approximation of classical Newtonian physics. In the mind-independent world – i.e. not the world of your everyday experience – the post-Everett decoherence program in QM pioneered by Zeh, Zurek et al. explains the emergence of an approximation of classical “branches” for one’s everyday world-simulations to track. Yet within the CNS, only the superposition principle allows you to run a classical world-simulation tracking such gross fitness-relevant features of your local extracranial environment. A coherent quantum mind can run phenomenally-bound simulations of a classical world, but a notional classical mind couldn’t phenomenally simulate a classical world – or phenomenally simulate any other kind of world. For a supposedly “classical” mind would just be patterns of membrane-bound neuronal mind-dust: mere pixels of experience, a micro-experiential zombie.


Critically, molecular matter-wave interferometry can in principle independently be used to test the truth – or falsity – of this conjecture (see: https://www.physicalism.com/#6).


OK, that’s the claim. Why would (almost) no scientifically informed person take the conjecture seriously?


In a word, decoherence.


On a commonsense chronology of consciousness, our experience of phenomenally bound perceptual objects “arises” via patterns of distributed neuronal firings over a timescale of milliseconds – the mystery lying in how mere synchronised firing of discrete, decohered, membrane-bound neurons / micro-experiences could generate phenomenal unity, whether local or global. So if the lifetime of coherent superpositions of distributed neuronal feature-processors in the CNS were milliseconds, too, then there would be an obvious candidate for a perfect structural match between the phenomenology of our conscious minds and neurobiology / fundamental physics – just as I’m proposing above. Yet of course this isn’t the case. The approximate theoretical lifetimes of coherent neuronal superpositions in the CNS can be calculated: femtoseconds or less. Thermally-induced decoherence is insanely powerful and hard to control. It’s ridiculous – intuitively at any rate – to suppose that such fleeting coherent superpositions could be recruited to play any functional role in the living world. An epic fail!


Too quick.
Let’s step back.
Many intelligent people initially found it incredible that natural selection could be powerful enough to throw up complex organisms as thermodynamically improbable as Homo sapiens. We now recognise that the sceptics were mistaken: the human mind simply isn’t designed to wrap itself around evolutionary timescales of natural selection playing out over hundreds of millions of years. In the CNS, another form of selection pressure plays out – a selection pressure over one hundred orders of magnitude (sic) more powerful than selection pressure on information-bearing self-replicators as conceived by Darwin. “Quantum Darwinism” as articulated by Zurek and his colleagues isn’t the shallow, tricksy metaphor one might naively assume; and the profound implications of such a selection mechanism must be explored for the world-simulation running inside your transcendental skull, not just for the extracranial environment. At work here is unimaginably intense selection pressure favouring comparative resistance to thermally (etc.)-induced decoherence [i.e. the rapid loss of coherence of the complex phase amplitudes of the components of a superposition] of functionally bound phenomenal states of mind in the CNS. In my view, we face a failure of imagination of the potential power of selection pressure analogous to the failure of imagination of critics of Darwin’s account of human evolution via natural selection. It’s not enough lazily to dismiss sub-femtosecond decoherence times of neuronal superpositions in the CNS as the reductio ad absurdum of quantum mind. Instead, we need to do the interferometry experiments definitively to settle the issue, not (just) philosophize.


Unfortunately, unlike Andrés, I haven’t been able to think of a DIY desktop experiment that could falsify or vindicate the conjecture. The molecular matter-wave experiment I discuss in “Schrodinger’s Neurons” is conceptually simple but (horrendously) difficult in practice. And the conjecture it tests is intuitively so insane that I’m sometimes skeptical the experiment will ever get done. If I sound like an advocate rather than a bemused truth-seeker, I don’t mean to be so; but if phenomenal binding isn’t quantum-theoretically or classically explicable, then dualism seems unavoidable. In that sense, David Chalmers is right.


How come I’m so confident that the superposition principle doesn’t break down in the CNS? After all, the superposition principle has been tested only up to the level of fullerenes, and no one yet has a proper theory of quantum gravity. Well, besides the classical impossibility of the manifest phenomenal unity of consciousness, and the cogent reasons that a physicist would give you for not modifying the unitary Schrödinger dynamics, the reason is really just a philosophical prejudice on my part. Namely, the universal validity of the superposition principle of QM offers the only explanation-space that I can think of for why anything exists at all: an informationless zero ontology dictated by the quantum analogue of the Library of Babel.


We shall see.

– David Pearce, commenting on the latest significant article published on this blog.

LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

[Content Warnings: Psychedelic Depersonalization, Fear of the Multiverse, Personal Identity Doubts, Discussion about Quantum Consciousness, DMT entities, Science]

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

– Emily Dickinson

Is it for real?

A sizable percentage of people who try a high dose of DMT end up convinced that the spaces they visit during the trip exist in some objective sense; they either suspect, intuit or conclude that their psychonautic experience reflects something more than simply the contents of their minds. Most scientists would argue that those experiences are just the result of exotic brain states; the worlds one travels to are bizarre (often useless) simulations made by our brain in a chaotic state. This latter explanation space forgoes alternate realities for the sake of simplicity, whereas the former envisions psychedelics as a multiverse portal technology of some sort.

Some exotic states, such as DMT breakthrough experiences, do typically create feelings of glimpsing foundational information about the depth and structure of the universe. Entity contact is frequent, and these seemingly autonomous DMT entities are often reported to have the ability to communicate with you. Achieving a verifiable contact with entities from another dimension would revolutionize our conception of the universe. Nothing would be quite as revolutionary, really. But how to do so? One could test the external reality of these entities by asking them to provide information that cannot be obtained unless they themselves held an objective existence. In this spirit, some have proposed to ask these entities complex mathematical questions that would be impossible for a human to solve within the time provided by the trip. This particular test is really cool, but it has the flaw that DMT experiences may themselves trigger computationally-useful synesthesia of the sort that Daniel Tammet experiences. Thus even if DMT entities appeared to solve extraordinary mathematical problems, it would still stand to reason that it is oneself who did it and that one is merely projecting the results into the entities. The mathematical ability would be the result of being lucky in the kind of synesthesia DMT triggered in you.

A common overarching description of the effects of psychedelics is that they “raise the frequency of one’s consciousness.” Now, this is a description we should take seriously whether or not we believe that psychedelics are inter-dimensional portals. After all, promising models of psychedelic action involve fast-paced control interruption, where each psychedelic would have its characteristic control interrupt frequency. And within a quantum paradigm, Stuart Hameroff has argued that psychedelic compounds work by bringing up the quantum resonance frequency of the water inside our neurons’ microtubules (perhaps going from megahertz to gigahertz), which he claims increases the non-locality of our consciousness.

In the context of psychedelics as inter-dimensional portals, this increase in the main frequency of one’s consciousness may be the key that allows us to interact with other realities. Users describe a sort of tuning of one’s consciousness, as if the interface between one’s self and the universe underwent some sudden re-adjustment in an upward direction. In the same vein, psychedelicists (e.g. Rick Strassman) frequently describe the brain as a two-way radio, and then go on to claim that psychedelics expand the range of channels we can be attuned to.

One could postulate that the interface between oneself and the universe that psychonauts describe has a real existence of its own. It would provide the bridge between us as (quantum) monads and the universe around us; and the particular structure of this interface would determine the selection pressures responsible for the part of the multiverse that we interact with. By modifying the spectral properties of this interface (e.g. by drastically raising the main frequency of its vibration) with, e.g. DMT, one effectively “relocates” (cf. alien travel) to other areas of reality. Assuming this interface exists and that it works by tuning into particular realities, what sorts of questions can we ask about its properties? What experiments could we conduct to verify its existence? And what applications might it have?

The Psychedelic State of Input Superposition

Once in a while I learn about a psychedelic effect that captures my attention precisely because it points to simple experiments that could distinguish between the two rough explanation spaces discussed above (i.e. “it’s all in your head” vs. “real inter-dimensional travel”). This article will discuss a very odd phenomenon whose interpretations do indeed have different empirical predictions. We are talking about the experience of sensing what appears to be a superposition of inputs from multiple adjacent realities. We will call this effect the Psychedelic State of Input Superposition (PSIS for short).

There is no known way to induce PSIS on purpose. Unlike the reliable DMT hyper-dimensional journeys to distant dimensions, PSIS is a rare closer-to-home effect and it manifests only on high doses of LSD (and maybe other psychedelics). Rather than feeling like one is tuning into another dimension in the higher frequency spectrum, it feels as if one just accidentally altered (perhaps even broke) the interface between the self and the universe in a way that multiplies the number of realities you are interacting with. After the event, the interface seems to tune into multiple similar universes at once; one sees multiple possibilities unfold simultaneously. After a while, one somehow “collapses” into only one of these realities, and while coming down, one is thankful to have settled somewhere specific rather than remaining in that weird in-between. Let’s take a look at a couple of trip reports that feature this effect:

[Trip report of taking a high dose of LSD on an airplane]: So I had what you call “sonder”, a moment of clarity where I realized that I wasn’t the center of the universe, that everyone is just as important as me, everyone has loved ones, stories of lost love etc, they’re the main character in their own movies.


That’s when shit went quantum. All these stories begun sinking in to me. It was as if I was beginning to experience their stories simultaneously. And not just their stories, I began seeing the story of everyone I had ever met in my entire life flash before my eyes. And in this quantum experience, there was a voice that said something about Karma. The voice told me that the plane will crash and that I will be reborn again until the quota of my Karma is at -+0. So, for every ill deed I have done, I would have an ill deed committed to me. For every cheap T-shirt I purchased in my previous life, I would live the life of the poor Asian sweatshop worker sewing that T-shirt. For every hooker I fucked, I would live the life of a fucked hooker.


And it was as if thousands of versions of me was experiencing this moment. It is hard to explain, but in every situation where something could happen, both things happened and I experienced both timelines simultaneously. As I opened my eyes, I noticed how smoke was coming out of the top cabins in the plane. Luggage was falling out. I experienced the airplane crashing a thousand times, and I died and accepted death a thousand times, apologizing to the Karma God for my sins. There was a flash of the brightest white light imagineable and the thousand realities in which I died began fading off. Remaining was only one reality in which the crash didn’t happen. Where I was still sitting in the plane. I could still see the smoke coming out of the plane and as a air stewardess came walking by I asked her if everything was alright. She said “Yes, is everything alright with YOU?”.


— Reddit user I_DID_LSD_ON_A_PLANE, in r/BitcoinMarkets (why there? who knows).

Further down on the same thread, written by someone else:

[A couple hours after taking two strong hits of LSD]: Fast-forward to when I’m peaking hours later and I find myself removed from the timeline I’m in and am watching alternate timelines branch off every time someone does something specific. I see all of these parallel universes being created in real time, people’s actions or interactions marking a split where both realities exist. Dozens of timelines, at least, all happening at once. It was fucking wild to witness.


Then I realize that I don’t remember which timeline I originally came out of and I start to worry a bit. I start focusing, trying to remember where I stepped out of my particular universe, but I couldn’t figure it out. So, with the knowledge that I was probably wrong, I just picked one to go back into and stuck with it. It’s not like I would know what changed anyway, and I wasn’t going to just hang out here in the whatever-this-place-is outside of all of them.


Today I still sometimes feel like I left a life behind and jumped into a new timeline. I like it, I feel like I left a lot of baggage behind and there are a lot of regrets and insecurities I had before that trip that I don’t have anymore. It was in a different life, a different reality, so in this case the answer I found was that it’s okay to start over when you’re not happy with where you are in life.


— GatorAutomator

Let us summarize: Person X takes a lot of LSD. At some point during the trip (usually after feeling that “this trip is way too intense for me now”) X starts experiencing sensory input from what appear to be different branches of the multiverse. For example, imagine that person X can see a friend Y sitting on a couch in the corner. Suppose that Y is indecisive, and that as a result he makes different choices in different branches of the multiverse. If Y is deciding whether to stand up or not, X will suddenly see a shadowy figure of Y standing up while another shadowy figure of Y remains sitting. Let’s call them Y-sitting and Y-standing. If Y-standing then turns indecisive about whether to drink some water or go to the bathroom, X may see one shadowy figure of Y-standing getting water and a shadowy figure of Y-standing walking towards the bathroom, all the while Y-sitting is still on the couch. And so it goes. The number of times per second that Y splits and the duration of the perceived superposition of these splits may be a function of X’s state of consciousness, the substance and dose consumed, and the degree of indecision present in Y’s mind.

The two quotes provided are examples of this effect, and one can find a number of additional reports online with stark similarities. There are two issues at hand here. First, what is going on? And second, can we test it? We will discuss three hypotheses to explain what goes on during PSIS, propose an experiment to test the third one (the Quantum Hypothesis), and provide the results of such an experiment.

Hard-nosed scientists may want to skip to the “Experiment” section, since the following contains a fair amount of speculation (you have been warned).

Three Hypotheses for PSIS: Cognitive, Spiritual, Quantum

In order to arrive at an accurate model of the world, one needs to take into account both the prior probability of each hypothesis and the likelihood it assigns to the available evidence. Even if one of your priors is extremely strong (e.g. a strong belief in materialism), it is still rational to update one’s probability estimates of alternative hypotheses when new relevant evidence comes in. The difficulty often lies in finding experiments for which the various hypotheses assign very different likelihoods to one’s observations. As we will see, the quantum hypothesis has this characteristic: it is the only one that actually predicts a positive result for the experiment.
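As a toy illustration of the update logic above, here is a minimal Bayesian update over the three hypotheses (cognitive, spiritual, quantum). Every prior and likelihood below is a made-up number chosen purely for illustration; nothing in this article commits to these values:

```python
# Toy Bayesian update over the three PSIS hypotheses.
# All numbers are made up purely for illustration.

def posterior(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior * likelihood, then normalized."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# A materialist prior strongly favors the cognitive hypothesis:
priors = {"cognitive": 0.90, "spiritual": 0.09, "quantum": 0.01}

# Suppose an experiment comes back positive, and only the quantum
# hypothesis strongly predicted that outcome:
likelihoods = {"cognitive": 0.01, "spiritual": 0.05, "quantum": 0.90}

post = posterior(priors, likelihoods)
# Even the strong materialist prior shifts substantially:
# post == {"cognitive": 0.40, "spiritual": 0.20, "quantum": 0.40}
```

The point of the sketch is exactly the one made in the text: a decisive experiment is one to whose outcome the competing hypotheses assign very different likelihoods.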

The Cognitive Hypothesis

The first (and perhaps least surreal) hypothesis is that PSIS is “only in one’s mind”. When person X sees person Y both standing up and staying put, what may be happening is that X is receiving photons only from Y-standing and that Y-sitting is just a hallucination that X’s inner simulation of her environment failed to erase.

Psychedelics intensify one’s experience, and this is thought to be the result of control interruption. This means that inhibition of mental content by cortical feedback is attenuated. In the psychedelic state, sensory impressions, automatic reactions, feelings, thoughts and all other mental contents are more intense and longer-lived. This includes the predictions that you make about how your environment will evolve. Not only is one’s sensory input perceived as more intense, one’s imagined hypotheticals are also perceived more intensely.

Under normal circumstances, cortical inhibition makes our failed predictions quickly disappear. Psychedelic states of consciousness may be poor at inhibiting these predictions. In this account, X may be experiencing her brain’s past predictions of what Y could have done overlaid on top of the current input that she is receiving from her physical environment. In a sense, she may be experiencing all of the possible “next steps” that she simply intuited. While these simulations typically remain below the threshold of awareness (or just above it), in a psychedelic state they may reinforce themselves in unpredictable ways. X’s mind never traveled anywhere and there is nothing really weird going on. X is just experiencing the aftermath of a specific failure of information processing concerning the inhibition of past predictions.

Alternatively, very intense emotions such as those experienced on intense ego-killing psychedelic experiences may distort one’s perception so much that one begins to suspect that one is perhaps dead or in another dimension. We can posit that the belief that one is not properly connected to one’s brain (or that one is dying) can trigger even stronger emotions and unleash a cascade of further distortions. This positive feedback loop may create episodes of intense confusion and overlapping pieces of information, which later might be interpreted as “seeing splitting universes”.

The Spiritual Hypothesis

Many spiritual traditions postulate the existence of alternate dimensions, additional layers of reality, and hidden spirit pathways that connect all of reality. These traditions often provide rough maps of these realities and may claim that some people are able to travel to such far-out regions with mental training and consciousness technologies. For illustration, let’s consider Buddhist cosmology, which describes 31 planes of existence. Interestingly, one of the core ideas of this cosmology is that the major characteristic that distinguishes the planes of existence is the states of consciousness typical of their inhabitants. These states of consciousness are correlated with moral conditions such as the ethical quality of their past deeds (karma), their relationship with desire (e.g. whether it is compulsive, sustainable or indifferent) and their existential beliefs. In turn, a feature of this cosmology is that it allows inter-dimensional travel by changing one’s state of consciousness. The part of the universe one interacts with is a function of one’s karma, affinities and beliefs. So by changing these variables with meditation (or psychedelic medicine) one can also change which world we exist in.

An example of a very interesting location worth trying to travel to is the mythical city of Shambhala, the location of the Kalachakra Tantra. This city has allegedly turned into a pure land thanks to the fact that its king converted to Buddhism after meeting the Buddha. Pure lands are abodes populated by enlightened and quasi-enlightened beings whose purpose is to provide an optimal teaching environment for Buddhism. One can go to Shambhala by either reincarnating there (with good karma and the help of some pointers and directions at the time of death) or by traveling there directly during meditation. In order to do the latter, one needs to kindle one’s subtle energies so that they converge on one’s heart, while one is embracing the Bodhisattva ethic (focusing on reducing others’ suffering as a moral imperative). Shambhala may not be in a physical location accessible to humans. Rather, Buddhist accounts would seem to depict it as a collective reality built by people which manifests on another plane of existence (specifically somewhere between the 23rd and 27th layer). In order to create a place like that one needs to bring together many individuals in a state of consciousness that exhibits bliss, enlightenment and benevolence. A pure land has no reality of its own; its existence is the result of the states of consciousness of its inhabitants. Thus, the very reason why Shambhala can even exist as a place somewhere outside of us is because it is already a potential place that exists within us.

Similar accounts of a wider cosmological reality can be found elsewhere (such as Hinduism, Zoroastrianism, Theosophy, etc.). These accounts may be consistent with the sort of experiences having to do with astral travel and entity contact that people have while on DMT and other psychedelics in high doses. However, it seems a lot harder to explain PSIS with an ontology of this sort. While reality is indeed portrayed as immensely vaster than what science has shown so far, we do not really encounter claims of parallel realities that are identical to ours except that your friend decided to go to the bathroom rather than drink some water just now. In other words, while many spiritual ontologies are capable of accommodating DMT hyper-dimensional travel, I am not aware of any spiritual worldview that also claims that whenever two things can happen, they both do in alternate realities (or, more specifically, that this leads to reality splitting).

The only spiritual-sounding interpretation of PSIS I can think of is the idea that these experiences are the result of high-level entities such as guardians, angels or trickster djinns who used your LSD state to teach you a lesson in an unconventional way. The first quote (the one written by Reddit user I_DID_LSD_ON_A_PLANE) seems to point in this direction, where the so-called Karma God is apparently inducing a PSIS experience and using it to illustrate the idea that we are all one (i.e. Open Individualism). Furthermore, the experience viscerally portrays the way that this knowledge should impact our feelings of self-importance (by creating a profound feeling of sonder). This way, the tripper may develop a lasting need to work towards peace, wisdom and enlightenment for the benefit of all sentient beings.

Life as a learning experience is a common trope among spiritual worldviews. It is likely that the spiritual interpretations that emerge in a state of psychedelic depersonalization and derealization will depend on one’s pre-existing ideas of what is possible. The atonement of one’s sins, becoming aware of one’s karma, feeling our past lives, realizing emptiness, hearing a dire mystical warning, etc. are all ideas that already exist in human culture. In an attempt to make sense – any sense – of the kind of qualia experienced in high doses of psychedelics, our minds may be forced to instantiate grandiose delusions drawn from one’s reservoir of far-out ideas.

During a super intense psychedelic experience in which one’s self-models fail dramatically and one experiences fear of ego dissolution, interpreting what is happening as the result of the Karma God judging you and then giving you another chance at life can viscerally seem to make a lot of sense at the time.

The Quantum Hypothesis

For the sake of transparency I must say that we currently do not have a derivation of PSIS from first principles. In other words, we have not yet found a way to use the postulates of quantum mechanics to account for PSIS (that is, assuming that the cognitive and spiritual hypotheses are not the case). That said, there are indeed some things to be said here: while a theory is missing, we can at least talk about what a quantum mechanical account of PSIS would have to look like. That is, we can at least make sense of some of the features that the theory would need to have in order to predict that people on LSD would be able to see the superposition of macroscopic branches of the multiverse.

Why would being on acid allow you to receive input from macroscopic environments that have already decohered? How could taking LSD possibly prevent the so-called collapse of the wavefunction? You might think: “well, why even think about it? It’s simply impossible, because the collapse of the wavefunction is an axiom of quantum mechanics, and we know it is true because the predictions of quantum field theories such as QED agree with experimental data up to the 12th decimal place.” Before jumping to this conclusion, though, let us remember that there are several formulations of quantum mechanics. Both the Born rule (which determines the probability of seeing different outcomes from a given quantum measurement) and the collapse of the wavefunction (i.e. that any quantum state other than the one that was measured disappears) are indeed axiomatic for some formulations. But other formulations actually derive these features and don’t consider them fundamental. Here is Sean Carroll explaining the usual postulates that are used to teach quantum mechanics to undergraduate audiences:

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.
  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

In contrast, here is what you need to specify for the Everett (Multiple Worlds) formulation of quantum mechanics:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.

And that’s it. As you can see this formulation does not employ any collapse of the wavefunction, and neither does it consider the Born rule as a fundamental law. Instead, the wavefunction is thought to merely seem to collapse upon measurement (which is achieved by nearly diagonalizing its components along the basis of the measurement; strictly speaking, neighboring branches never truly stop interacting, but the relevance of their interaction approaches zero very quickly). Here the Born rule is derived from first principles rather than conceived as an axiom. How exactly one can derive the Born rule is a matter of controversy, however. Currently, two very promising theoretical approaches to do so are Quantum Darwinism and the so-called Epistemic Separability Principle (ESP for short, a technical physics term not to be confused with Extra Sensory Perception). Although these approaches to deriving the Born rule are considered serious contenders for a final explanation (and they are not mutually exclusive), they have been criticized for being somewhat circular. The physics community is far from having a consensus on whether these approaches truly succeed.
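For concreteness, the Born rule itself (postulate 4 in the undergraduate list quoted above) is essentially a one-liner. This is a generic textbook computation, not anything specific to the Everettian derivations just mentioned; the state vector below is an arbitrary example:

```python
import numpy as np

# Born rule: the probability of each measurement outcome is the squared
# magnitude of the corresponding amplitude of a normalized state vector.
psi = np.array([3 + 4j, 1 - 2j, 2 + 0j])
psi = psi / np.linalg.norm(psi)   # normalize so the probabilities sum to 1

probs = np.abs(psi) ** 2          # [25/34, 5/34, 4/34]

# Simulating repeated measurements just means sampling outcomes with
# these probabilities:
rng = np.random.default_rng(0)
outcomes = rng.choice(len(psi), size=1000, p=probs)
```

In a collapse formulation this sampling rule is an axiom; in the Everettian approaches discussed above it is supposed to fall out of the dynamics plus extra assumptions, which is exactly where the controversy lies.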

Is there any alternative to either axiomatizing or deriving the apparent collapse and the Born rule? Yes, there is: we can think of them as regularities contingent upon certain conditions that are always (or almost always) met in our sphere of experience, but that are not a universal fact about quantum mechanics. Macroscopic decoherence and Born rule probability assignments work very well in our everyday lives, but they may not hold universally. In particular – and this is a natural idea to have under any view that links consciousness and quantum mechanics – one could postulate that one’s state of consciousness influences the mind-body interaction in such a way that information from one’s quantum environment seeps into one’s mind in a different way.

Don’t get me wrong; I am aware that the Born rule has been experimentally verified with extreme precision. I only ask that you bear in mind that many scientific breakthroughs share a simple form: they question the constancy of certain physical properties. For example, Einstein’s theory of special relativity worked out the implications of the fact that the speed of light is observer-independent. In turn this makes the passage of time of external systems observer-dependent. Scientists had a hard time believing Einstein when he arrived at the conclusion that accelerating our frame of reference to extremely high velocities could dilate time. What was thought to be a constant (the passage of time throughout the universe) turned out to be an artifact of the fact that we rarely travel fast enough to notice any deviation from Newton’s laws of motion. In other words, our previous understanding was flawed because it assumed that certain observations did not break down in extreme conditions. Likewise, maybe we have been accidentally ignoring a whole set of physically relevant extreme conditions: altered states of consciousness. The apparent wavefunction collapse and the Born rule may be perfectly constant in our everyday frame of reference, and yet variable across the state-space of possible conscious experiences. If this were the case, we’d finally understand why it seems so hard to derive the Born rule from first principles: it’s impossible.
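To make the relativity analogy concrete: the Lorentz factor γ = 1/√(1 − v²/c²) governs time dilation, and it is numerically indistinguishable from 1 at everyday speeds, which is why the passage of time looked like a universal constant rather than a local approximation. This is standard physics, included only to illustrate the analogy:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

everyday = lorentz_gamma(30.0)      # highway speed: gamma is ~1 + 5e-15
extreme = lorentz_gamma(0.99 * C)   # relativistic speed: gamma is ~7.09
```

At 30 m/s the deviation from 1 is far below anything a clock of Newton’s era could detect; at 0.99c it is a factor of about seven. The article’s suggestion is that altered states of consciousness might play the role of the “extreme regime” for the apparent collapse and the Born rule.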

Succinctly, the Quantum Hypothesis is that psychedelic experiences modify the way one’s mind interacts with its quantum environment in such a way that the world does not appear to decohere any longer from one’s point of view. Our ignorance about the non-universality of the apparent collapse of the wavefunction is just a side effect of the fact that physicists do not usually perform experiments during intense life-changing entheogenic mind journeys. But for science, today we will.

Deriving PSIS with Quantum Mechanics

Here we present a rough (incomplete) sketch of what a possible derivation of PSIS from quantum mechanics might look like. To do so we need three background assumptions: First, conscious experiences must be macroscopic quantum coherent objects (i.e. ontologically unitary subsets of the universal wavefunction, akin to superfluid helium or Bose–Einstein condensates, except at room temperature). Second, people’s decision-making process must somehow amplify low-level quantum randomness into macroscopic history bifurcations. And third, the properties of our quantum environment* are in part the result of the quantum state of our mind, which psychedelics can help modify. This third assumption brings into play the idea that if our mind is more coherent (e.g. is in a super-symmetrical state) it will select for wavefunctions in its environment that themselves are more coherent. In turn, the apparent lifespan of superpositions may be extended long enough that the quantum environment of one’s mind receives records from both Y-sitting and Y-standing while they still overlap. Now, how credible are these three assumptions?

That events of experience are macroscopic quantum coherent objects is an explanation space usually perceived as pseudo-scientific, though a sizable number of extremely bright scientists and philosophers do entertain the idea very seriously. Contrary to popular belief, there are legitimate reasons to connect quantum computing and consciousness. The reasons for making this connection include the possibility of explaining the causal efficacy of consciousness, finding an answer to the palette problem with quantum fields and solving the phenomenal binding problem with quantum coherence and panpsychism.

The second assumption claims that people around you work as quantum Random Number Generators. That human decision-making amplifies low-level quantum randomness is thought to be likely by at least some scientists, though the time-scale on which this happens is still up for debate. The brain’s decision-making is chaotic, and over the span of seconds it may amplify quantum fluctuations into macroscopic differences. Thus, people around you making decisions may result in splitting universes (e.g. “[I] am watching alternate timelines branch off every time someone does something specific.” – GatorAutomator’s quote above). Presumably, this assumption would also imply that during PSIS not only people but also physics experiments would lead to apparent macroscopic superposition.

With regards to the third assumption: widespread microscopic decoherence is not, apparently, a necessary consequence of the postulates of quantum mechanics. Rather, it is a very specific outcome of (a) our universe’s Hamiltonian and (b) the starting conditions of our universe, i.e. Pre-Inflation/Eternal Inflation/Big Bang (Ney & Albert, 2013). In principle, psychedelics may influence the part of the Hamiltonian that matters for the evolution of our mind’s wavefunction and its local interactions. In turn, this may modify the decoherence patterns of our consciousness with its local environment and, perhaps, ultimately the surrounding macroscopic world. Of course we do not know if this is possible, and I would have to agree that it is extremely far-fetched.

The overall picture that would emerge from these three assumptions would take the following form: both the mental content and raw phenomenal character of our states of consciousness are the result of the quantum micro-structure of our brains. By modifying this micro-structure, one is not only altering the selection pressures that give rise to fully formed experiences (i.e. quantum darwinism applied to the compositionality of quantum fields) but also altering the selection pressures that determine which parts of the universal wave-function we are entangled with (i.e. quantum darwinism applied to the interactions between coherent objects). Thus psychedelics may not only influence how our experience is shaped within, but also how it interacts with the quantum environment that surrounds it. Some mild psychedelic states (e.g. MDMA) may influence mostly the inner degrees of freedom of one’s mind, while other more intense states (e.g. DMT) may be the result of severe changes to the entanglement selection pressures and thus result in the apparent disconnection between one’s mind and one’s local environment. Here PSIS would be the result of decreasing the rate at which our mind decoheres (possibly by increasing the degree to which our mind is in a state of quantum confinement). In turn, by boosting one’s own inner degree of quantum superposition one may also broaden the degree of superposition acceptable at the interface with one’s quantum environment. One could now readily take in packets of information that have a wider degree of superposition. In the right circumstances, this may result in one’s mind experiencing information seemingly coming from alternate branches of the multiverse. In other words, the trick to PSIS both in the Quantum and the Spiritual Hypothesis is the same (though for different reasons): travel to other dimensions by being the change that you wish to see in the world. 
You need to increase your own degree of quantum coherence so that you become capable of interacting with a more coherent quantum environment.

If this were the case it would call for a conceptual revolution. We would stop thinking of “our universe” as the “place we inhabit” and instead think of it as “the quantum environment our consciousness can interface with”. Perhaps life as we know it exists in a quantum niche that can give rise to embodied biological beings, but the universal wavefunction also contains alien state-spaces in which the patterns of entanglement work in different ways and lead to different evolutionary histories. Our minds evolved to interact with the particular environment that leads to the sustenance of our everyday macroscopic world. Our consciousness is shaped to fit in here, and it has a symbiotic relationship with the body that hosts it.

Now, how do we go about testing the Quantum Hypothesis? Following previous work on psychophysics for psychedelic research, the main idea is to show that people on psychedelics can achieve something that sober individuals can’t. In this case, the task is designed to be impossible if one is only receiving input from one world (as predicted by the Cognitive Hypothesis). Only if PSIS is the result of simultaneously experiencing input from the several nearby branches that correspond to different outcomes of quantum events can the task be successfully performed.

The Experiment


The task in question involves three little white squares on a black background. At each time step (~16 milliseconds) the squares move one pixel down, and some number of pixels horizontally. While the downward movement is constant, the change in the horizontal direction is chosen randomly. The horizontal movement of one of the squares is randomized on the fly with a Quantum Random Number Generator (qRNG**). We call this the quantum square. The horizontal movement of the other two squares follows a path that was determined at the beginning of each trial by a Classical Random Number Generator (cRNG***). We will refer to these two squares as the classical squares. The task for the participant is to identify which of the squares is the “odd one out” (i.e. the quantum square). Here is an infographic that explains the experimental setup:


In order to make the horizontal movement seem chaotic and, at times, even dramatic, it is randomized by choosing a horizontal change of either -4, -1, 1 or 4 pixels. By using a jump of 4 pixels (rather than e.g. 2) the squares sometimes seem to advance very quickly in one direction. The rationale for doing this was to make the predicted effect (assuming the Quantum Hypothesis) more noticeable: we want low-probability paths that are extremely different from the typical path, which in turn makes it easy to distinguish between single paths and the superposition of many possible paths at once.
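For concreteness, here is a minimal sketch of how a classical square's path can be pre-computed at the start of a trial. The function and variable names are hypothetical; the actual repository code may be organized differently:

```javascript
// Horizontal steps are drawn uniformly from {-4, -1, 1, 4};
// the vertical position advances one pixel per ~16 ms frame.
const H_STEPS = [-4, -1, 1, 4];

// Generate a full classical path of `nFrames` positions, using an
// injectable RNG (Math.random by default) so the whole path can be
// fixed at the beginning of the trial, as described above.
function classicalPath(nFrames, startX, rng = Math.random) {
  const path = [{ x: startX, y: 0 }];
  for (let i = 1; i < nFrames; i++) {
    const prev = path[i - 1];
    const dx = H_STEPS[Math.floor(rng() * H_STEPS.length)];
    path.push({ x: prev.x + dx, y: prev.y + 1 });
  }
  return path;
}
```

The quantum square would follow the same update rule, except that each horizontal step is drawn on the fly from the buffered quantum random numbers instead of a pre-computed sequence.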

Critically, the starting position of the quantum square (whether left, center or right) for each of the trials during an experiment is obtained with a cRNG at the very beginning of the experiment. Why not use a qRNG? If we randomized the position of the quantum square using a qRNG we might be sabotaging ourselves: participants might see a superposition of classical and quantum squares on each of the three positions and thus not see any square behave differently than the other two. Thus, for the experiment to work (under the assumption that PSIS is explained by the Quantum Hypothesis) the only aspect that should be randomized using a qRNG is the horizontal displacement of the quantum square.

Since the numbers coming from the qRNG need to be requested over the internet, it is also crucial that we obtain fresh quantum random numbers as frequently as possible without disrupting the flow of the experiment. Thus, a request for new quantum random numbers is made as often as possible (usually every 1 to 2 seconds). Since internet connections, server response times and network reliability vary from moment to moment, it is inevitable that once in a while we will have to stop the downward movement of all three squares as we wait for the next batch of random numbers. In principle this should have no negative effect at all on the ability to choose the odd one out. This problem merely interrupts the flow of the experiment; it should not invalidate the results. We kept track of the number of trials that were interrupted for this reason (recorded in the “Network” column of the table of results below).
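The buffering logic can be sketched as follows. This is a simplified illustration with hypothetical names, not the actual client code; the batch source is injected so the same class works with the remote qRNG, a cRNG, or a test stub:

```javascript
// Buffers random numbers fetched in batches from an async source
// (in the real experiment, the ANU quantum random number server).
class RandomBuffer {
  constructor(fetchBatch, lowWater = 32) {
    this.fetchBatch = fetchBatch; // async () => array of numbers
    this.lowWater = lowWater;     // refill when fewer than this remain
    this.queue = [];
    this.refilling = false;
  }

  // Request a new batch when running low; never issue overlapping requests.
  async maybeRefill() {
    if (this.queue.length >= this.lowWater || this.refilling) return;
    this.refilling = true;
    try {
      const batch = await this.fetchBatch();
      this.queue.push(...batch);
    } finally {
      this.refilling = false;
    }
  }

  // Next number, or null if the buffer is dry; on null the renderer
  // pauses the downward movement of all three squares.
  next() {
    return this.queue.length > 0 ? this.queue.shift() : null;
  }
}
```

A null return from next() corresponds to the pauses tallied in the “Network” column of the results table.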

Participants were instructed to:

  1. Clone the Github code for the experiment, and to open it up in a Chrome browser.
  2. Make sure that they have a reliable high-speed internet connection.
  3. Record the pertinent facts (time of experiment, substance, dose and time of intake).
  4. Use a sensorially-minimized room for the experiment (such as a closed bathroom), turn off the lights, ask anyone who is in the house to be quiet and mind their own business, and wear earplugs.
  5. Meditate for a couple of minutes before conducting the experiment and to try to induce PSIS voluntarily.
  6. Choose the odd-one-out (i.e. the quantum square) on each trial, for a total of at least 30 trials.

Under this setup, what should the participants expect to see if the Quantum Hypothesis is correct?

Predicted Psychedelic Perception

The pictures above illustrate what the experiment looks like to three different observers. On the left we see what a sober participant would see at a given time during a trial. The squares are moving downwards, but since we are taking a snapshot in time they seem to be static. The image in the center shows what we would expect from a generic psychedelic experience. In brief, the typical control-interrupt action of psychedelics (i.e. tracers) effectively allows you to see more clearly the path that the squares have traced over time, but it would not allow you to notice any difference between the classical and quantum squares. The image on the right shows what we would expect to see if the Quantum Hypothesis is correct and PSIS is actually about perceiving nearby branches of the Everett multiverse. Notice how the center square is visibly different from the other two: it consists of the superposition of many alternative paths the square took in slightly different branches.

Implications of a Positive Result: Quantum Mind, Everett Rescue Missions and Psychedelic Cryptography

It is worth noting that if one can indeed reliably distinguish between the quantum and the classical squares, then this would have far-reaching implications. It would confirm that our minds are macroscopic quantum coherent objects and that psychedelics influence their pattern of interactions with their surrounding quantum environment. It would also provide strong evidence in favor of the Everett interpretation of quantum mechanics (in which all possibilities are realized). Moreover, we would not only have a new perspective on the fundamental nature of the universe and the mind; the discovery would also suggest some concrete applications. Looking far ahead, a positive result would encourage research on possible ways to achieve inter-dimensional travel, and in turn instantiate pan-Everettian rescue missions to reduce suffering elsewhere in the multiverse. The despair of confirming that the quantum multiverse is real might be evened out by the hope of finally being able to help sentient beings trapped in Darwinian environments in other branches of the universal wavefunction. Looking much closer to home, a positive result would lead to a breakthrough in psychedelic cryptography (PsyCrypto for short), where spies high on LSD would obtain the ability to read information secretly encoded in public light displays. Moreover, this particular kind of PsyCrypto would be impervious to discovery after the fact. Even given an arbitrary amount of time and resources to analyze a video recording of the event, it would not be possible to determine which of the squares was being guided by quantum randomness. Unlike other PsyCrypto techniques, this one cannot be decoded by applying psychedelic replication software to video recordings of the transmission.


Three people participated in the experiments: S (self), A, and B. [A and B are anonymous volunteers; for more information read the legal disclaimer at the end of this article.] Participant S (me) tried the experiment both sober and after drinking 2 beers. Participant A tried the experiment sober, on LSD, on 2C-B, and on a combination of the two. And participant B tried the experiment both sober and on DMT. The total number of trials recorded for each of the conditions is: 90 for the sober state, 275 for 2C-B, 60 for DMT, 120 for LSD and 130 for the LSD/2C-B combo. The overall summary of the results: chance-level performance in all conditions. You can find the breakdown of results for all experiments in the table shown below, and you can download the raw csv file from the Github repository.

Columns from left to right: Date, State (of consciousness), Dose(s), T (time), #Trials (number of trials), Correct (number of trials in which the participant made the correct choice), Percent correct (100*Correct/Trials), Participants (S=Self, A/B=anonymous volunteers), Requests / Second (server requests per second), Network (this tracks the number of times that a trial was temporarily paused while the browser was waiting for the next batch of quantum random numbers), Notes (by default the squares left a dim trail behind them and this was removed in two trials; by default the squares were 10×10 pixels in size, but a smaller size was used in some trials).

I thought about visualizing the results in a cool graph at first, but after I received them I realized that it would be pointless. Not a single experiment reached a statistically significant deviation from chance level; who is interested in seeing a bunch of bars representing chance-level outcomes? Null results are always boring to visualize.****

In addition to the overall performance in the task, I also wanted to hear the following qualitative assessment from the participants: did they notice any difference between the three squares? Was there any feeling that one of them was behaving differently than the other two? This is what they responded when I asked them: “I could never see any difference between the squares, so it felt like I was making random choices” (from A) and “DMT made the screen look like a hyper-dimensional tunnel and I felt like strange entities were watching over me as I was doing the experiment, and even though the color of the squares would fluctuate randomly, I never noticed a single square behaving differently than the other two. All three seemed unique. I did feel that the squares were being controlled by some entity, as if with an agency of their own, but I figured that was made up by my mind.” When I asked them if they noticed anything similar to the image labeled Psychedelic view as predicted by the Quantum Hypothesis (as shown above) they both said “no”.


It is noteworthy that neither participant reported an experience of PSIS during the experiments. Still, PSIS may turn out to be a continuum rather than a discrete either-or phenomenon, in which case we might expect some deviation from chance even without an explicit and noticeable input superposition. This would be analogous to blindsight, where people report not being able to see anything and yet perform better than chance in visual recognition tasks. That said, the effect sizes of blindsight and other phenomena in which information is processed unbeknownst to the participant tend to be very small. Thus, in order to confirm that quantum PSIS is happening below the threshold of awareness we may require a much larger number of samples (though still far smaller than what we would need if we were aiming to use the experiment to conduct psi research with or without psychedelics, again, due to the extremely small effect sizes).

Why did the experiment fail? First, the Quantum Hypothesis may simply be wrong (perhaps because it rests on false assumptions). Second, perhaps we were simply unlucky that PSIS was not triggered during the experiments; the set, setting, and dosages used may have failed to produce the desired effect (even if the state does indeed exist out there). And third, the experiment itself may be flawed: the delay between the quantum measurement at the server and its effect on the screen may be too large to produce the effect. In the current implementation (and taking into account network delays), the average delay between the moment the quantum measurement was conducted and the moment it appeared on the computer screen as horizontal movement was 0.9 seconds (usually in the range of 0.4 to 1.4 seconds, given an average lag of 0.5 seconds due to number buffering and 0.4 seconds of network time). This problem could be easily sidestepped by using an on-site qRNG obtained from hardware directly connected to the computer (as is common in psi research). To minimize the delay even further, the outcomes of the quantum measurements could be delivered directly to the brain via neuroimplants.


If psychedelic experiences do make you interact with other realities, I would like to know about it with a high degree of certainty. The present study was admittedly a very long shot, but in my judgment it was worth it. As Bayesians, we reasoned that since the Quantum Hypothesis can lead to a positive result for the experiment but the Cognitive Hypothesis can’t, a positive result should make us update our credence in the Quantum Hypothesis a great deal. A negative result should make us update in the opposite direction. That said, the probability should still not go to zero, since the negative result could be accounted for by the fact that participants failed to experience PSIS, and/or that the delay between the quantum measurement and the moment it influences the movement of the square on the screen is too large. Future studies should try to minimize these two possible sources of failure: first, by researching methods to reliably induce PSIS; and second, by minimizing the delay between branching and sensory input.

In the meantime, we can at least tentatively conclude that something along the lines of the Cognitive Hypothesis is the most likely case. In this light, PSIS turns out to be the result of a failure to inhibit predictions. Despite losing their status as suspected inter-dimensional portal technology, psychedelics still remain a crucial tool for qualia research. They can help us map out the state-space of possible experiences, allow us to identify the computational properties of consciousness, and maybe even allow us to reverse engineer the fundamental nature of valence.

[Legal Disclaimer]: Both participants A and B contacted me some time ago, soon after the Qualia Computing article How to Secretly Communicate with People on LSD made it to the front page of Hacker News and was linked by SlateStarCodex. They are both experienced users of psychedelics who take them about once a month. They expressed their interest in performing the psychophysics experiments I designed, and to do so while under the influence of psychedelic drugs. I do not know these individuals personally (nor do I know their real names, locations or even their genders). I have never encouraged these individuals to take psychedelic substances and I never gave them any compensation for their participation in the experiment. They told me that they take psychedelics regularly no matter what, and that my experiments would not be the primary reason for taking them. I never asked them to take any particular substance, either. They just said “I will take substance X on day Y, can I have some experiment for that?” I have no way of knowing (1) if the substances they claim they take are actually what they think they are, (2) whether the dosages are accurately measured, and (3) whether the data they provided is accurate and isn’t manipulated. That said, they did explain that they have tested their materials with chemical reagents, and are experienced enough to tell the difference between similar substances. Since there is no way to verify these claims without compromising their anonymity, please take the data with a grain of salt.

* In this case, the immediate environment would actually refer to the quantum degrees of freedom surrounding our consciousness within our brain, not the macroscopic exterior vicinity such as the chair we are sitting on or the friends we are hanging out with. In this picture, our interaction with that vicinity is actually mediated by many layers of indirection.

** The experiment used the Australian National University Quantum Random Numbers Server. By calling their API every 1 to 2 seconds we obtain truly random numbers that feed the x-displacement of the quantum square. This is an inexpensive and readily-available way to magnify decoherence events into macroscopic splitting histories in the comfort of your own home.

*** In this case, JavaScript’s Math.random() function. Unfortunately the RNG algorithm varies from browser to browser. It may be worthwhile to adopt a browser-independent implementation in the future to guarantee a uniform, high-quality source of classical randomness.

**** As calculated with a one-tailed binomial test with null probability equal to 1/3. The threshold of statistical significance at the p < 0.05 level is 15/30 correct responses, and for p < 0.001 we need at least 19/30. The best score that any participant managed to obtain was 14/30.
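These thresholds are easy to double-check. A sketch of the exact one-tailed binomial tail P(X ≥ k) for n = 30 trials with null success probability 1/3 (the function name is mine, not from the experiment's code):

```javascript
// Exact one-tailed binomial tail: probability of observing k or more
// correct answers out of n trials when guessing with success probability p.
function binomTail(k, n, p) {
  // Binomial coefficient, computed iteratively so intermediate values
  // stay well within double-precision range for n = 30.
  const choose = (n, r) => {
    let c = 1;
    for (let i = 0; i < r; i++) c = (c * (n - i)) / (i + 1);
    return c;
  };
  let tail = 0;
  for (let i = k; i <= n; i++) {
    tail += choose(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
  }
  return tail;
}

// binomTail(15, 30, 1/3) ≈ 0.043  (first score with p < 0.05)
// binomTail(19, 30, 1/3) ≈ 0.0007 (first score with p < 0.001)
// binomTail(14, 30, 1/3) ≈ 0.090  (best observed score: not significant)
```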