Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit- they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, this means there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside“. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., If I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and it’s impossible to objectively tell if a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts: “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering (whether monkeys can suffer, or whether dogs can), it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE), synthetic organism, or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; whether any aliens that we meet in the future can suffer; whether changing the internal architecture underlying our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of a disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence- in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness– enumerating its coherent constituent pieces– has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became– and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger as they’ve looked into techniques for how to “de-reify” consciousness while preserving some flavor of moral value, not smaller. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as treating a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’, then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system where you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists- you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
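This is the Putnam/Searle mapping problem, and it can be made concrete with a toy sketch (my own illustration, not anything from Brian’s writings): given any sequence of distinct physical micro-states, one can always construct an ad-hoc interpretation under which that sequence “implements” an arbitrary target computation.

```python
# Toy illustration of the "no privileged computation" point: given ANY
# sequence of distinct physical micro-states, we can build an ad-hoc
# mapping under which that sequence "implements" an arbitrary computation.
# (The micro-state values below are made-up stand-ins.)

def target_trace(n):
    """The state trace of some computation we care about, e.g. a counter."""
    return [("count", i) for i in range(n)]

# "Physical system": five successive micro-states of a shaken bag of popcorn.
popcorn_states = [0.91, 0.13, 0.55, 0.72, 0.08]

# Ad-hoc interpretation: pair each micro-state with a computational state.
interpretation = dict(zip(popcorn_states, target_trace(len(popcorn_states))))

# Under this mapping, the popcorn "performed" the counter computation;
# a different mapping would make it "perform" any other computation.
computed = [interpretation[s] for s in popcorn_states]
print(computed)  # [('count', 0), ('count', 1), ('count', 2), ('count', 3), ('count', 4)]
```

Absent some further constraint (counterfactual structure, causal organization, etc.), nothing privileges one such mapping over another, which is the sense in which “the computation this system performs” is observer-relative.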

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (2004) provides a more formal argument to this effect— that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible— in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.
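McCabe’s point about interpretation-dependence is easy to demonstrate directly: the very same four bytes of memory denote different numbers under different, equally valid numeric interpretations. A minimal sketch:

```python
# A concrete version of McCabe's interpretation-dependence point: the same
# "physical" bit pattern in memory represents different numbers depending
# entirely on the numeric interpretation we impose on it.

import struct

raw = b"\x00\x00\x80\x3f"  # four bytes of memory state

as_float = struct.unpack("<f", raw)[0]  # as little-endian IEEE-754 float
as_int = struct.unpack("<i", raw)[0]    # as little-endian signed 32-bit int

print(as_float, as_int)  # 1.0 1065353216
```

Nothing in the bytes themselves selects the float reading over the integer reading; the number is fixed only once an interpretation is supplied from outside.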

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point- that “There’s no objective meaning to ‘the computation that this physical system is implementing’”– then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.


Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1−ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that computations which instantiate suffering could also be uncomputed or reversed would suggest s-risks aren’t as serious as FRI treats them.
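For readers who want to check it, the Vaidman-bomb arithmetic in Aaronson’s quote can be verified numerically. This sketch (my own, using his small-ε approximations rather than a full quantum simulation) just evaluates the probabilities he quotes:

```python
# Numeric sanity-check of the Vaidman-bomb arithmetic quoted above:
# each of the 1/eps repetitions moves eps amplitude toward the "query"
# state if there is no bomb, but leaks only eps^2 probability per step
# (via decoherence) if there is one.

import math

eps = 0.01
steps = int(1 / eps)  # 1/eps = 100 repetitions

# No bomb: the amplitude rotates coherently by angle ~eps per step, so
# after 1/eps steps the query-state probability is sin(1)^2, i.e. order 1.
p_query_no_bomb = math.sin(steps * eps) ** 2

# Bomb: decoherence each step means probabilities add, so the total chance
# we ever queried the bomb is ~ (1/eps) * eps^2 = eps.
p_triggered_bomb = 1 - (1 - eps**2) ** steps

print(round(p_query_no_bomb, 3))   # ~0.708, order 1
print(round(p_triggered_bomb, 4))  # ~0.01, i.e. ~eps
```

Shrinking ε drives the bomb-trigger probability toward zero while leaving the no-bomb detection probability at order 1, which is exactly the asymmetry Aaronson describes.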

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, a red herring it not only doesn’t define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism- nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc., with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also help make FRI’s reasoning less opaque to outsiders, although making everything transparent could carry certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice versa. And if so, how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’, or even to choose between competing interpretations (e.g. my popcorn example). If this isn’t the case, why not?
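To make the worry concrete, here is a toy sketch (Python; the state labels are made up for illustration) of the popcorn-style argument: given any sequence of distinct physical states, one can always construct an after-the-fact mapping under which the system “implements” any computation of the same length, so the interpretation itself does no explanatory work:

```python
# Hypothetical "physical system": an arbitrary sequence of distinct states
# (stand-ins for successive snapshots of popcorn popping).
physical_trace = [f"popcorn-state-{i}" for i in range(5)]

# Any computation with a state trajectory of the same length...
computation_trace = ["init", "read", "add", "write", "halt"]

# ...can be "found" in the physical system via an ad hoc interpretation map.
interpretation = dict(zip(physical_trace, computation_trace))

# Reading the physical trace through the map recovers the computation exactly.
decoded = [interpretation[s] for s in physical_trace]
print(decoded == computation_trace)  # True: such a mapping always succeeds
```

The point is that since a mapping like this can be cooked up for any pair of traces, “this physical system implements computation C” is unconstrained unless some further principle privileges certain mappings over others.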

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation” (i.e., the understanding that there’s a formal hierarchy of computational systems, of which Turing machines are only one level among many), but doing so may have benefits too. Namely, it might be the only way to avoid Objection 6 (which I think is a fatal objection) while still allowing FRI to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).

 


In conclusion: I think FRI has a critically important goal, the reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition of what these things are. I look forward to further work in this field.

 

Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:

Flavors of Computation Are Flavors of Consciousness:
https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/

Is There a Hard Problem of Consciousness?
http://reducing-suffering.org/hard-problem-consciousness/

Consciousness Is a Process, Not a Moment:
http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/

How to Interpret a Physical System as a Mind:
http://reducing-suffering.org/interpret-physical-system-mind/

Dissolving Confusion about Consciousness:
http://reducing-suffering.org/dissolving-confusion-about-consciousness/

Debate between Brian & Mike on consciousness:

https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D

Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:

My meta-framework for consciousness, including the Symmetry Theory of Valence:
http://opentheory.net/PrincipiaQualia.pdf

My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:
http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/

My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:
http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/

My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/

Raising the Table Stakes for Successful Theories of Consciousness

What should we expect out of a theory of consciousness?

For a scientific theory of consciousness to have even the slightest chance of being correct, it must be able to address, at the very least, the following four questions*:

  1. Why consciousness exists at all (i.e. “the hard problem”; why we are not p-zombies)
  2. How it is possible to experience multiple pieces of information at once in a unitary moment of experience (i.e. the phenomenal binding problem; the boundary problem)
  3. How consciousness exerts the causal power necessary to be recruited by natural selection and allow us to discuss its existence (i.e. the problem of causal impotence vs. causal overdetermination)
  4. How and why consciousness has its countless textures (e.g. phenomenal color, smell, emotions, etc.) and the interdependencies of their different values (i.e. the palette problem)

In addition, the theory must be able to generate experimentally testable predictions. In Popper’s sense, the theory must make “risky” predictions. In a Bayesian sense, the theory must generate predictions that are much more likely if the theory is correct than if it is not, so that once the experiment is actually carried out, the posterior probabilities of the competing hypotheses differ substantially from their priors.
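The Bayesian version of this criterion is easy to see numerically. Here is a minimal sketch (Python, with hypothetical probabilities chosen purely for illustration) showing that a “risky” prediction, one far more likely under the theory than otherwise, moves the posterior far from the prior, while a safe prediction barely moves it:

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)."""
    p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / p_data

prior = 0.1  # hypothetical prior credence in the theory

# A "risky" prediction: very likely if the theory is true, unlikely otherwise.
risky = posterior(prior, p_data_given_h=0.9, p_data_given_not_h=0.05)

# A safe prediction: nearly as likely whether or not the theory is true.
safe = posterior(prior, p_data_given_h=0.9, p_data_given_not_h=0.8)

print(round(risky, 3))  # 0.667 -- far above the 0.1 prior
print(round(safe, 3))   # 0.111 -- barely moved
```

A successful risky prediction multiplies the odds on the theory by the likelihood ratio (here 0.9/0.05 = 18), which is exactly why Popperian riskiness and Bayesian evidential strength coincide.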

As discussed in a previous article, most contemporary philosophies of mind are unable to address one or more of these four problems (or simply fail to make any interesting predictions). David Pearce’s non-materialist physicalist idealism (not the schizophrenic word-salad it may seem at first) is one of the few theories that promises to meet these criteria and makes empirical predictions. This theory addresses the above questions in the following way:

(1) Why does consciousness exist?

Consciousness exists because reality is made of qualia. In particular, one might say that physics is implicitly the science that studies the flux of qualia. This would imply that all that exists is in fact a set of experiences whose interrelationships are encoded in the Universal Wavefunction of Quantum Field Theory. Thus we are collapsing two questions (“why does consciousness arise in our universe?” and “why does the universe exist?”) into a single question (“why does anything exist?”). What’s more, the question “why does anything exist?” may ultimately be solved with Zero Ontology. In other words, all that exists is implied by the universe having literally no information whatsoever. All (apparent) information is local; universally, we live in an information-less quantum Library of Babel.

(2) Why and how is consciousness unitary?

Due to the expansion of the universe, the universal wavefunction has topological bifurcations that effectively create locally connected networks of quantum entanglement that are disconnected from the rest of reality. These networks meet the criteria of being ontologically unitary while having the potential to hold multiple pieces of information at once. In other words, Pearce’s theory of consciousness postulates that the world is made of a large number of experiences, though the vast majority of them are incredibly tiny and short-lived. The overwhelming bulk of reality is made of decohered micro-experiences which are responsible for most of the phenomena we see in the macroscopic world, ranging from solidity to Newton’s laws of motion.

A few of these networks of entanglement are us: you, right now, as a unitary “thin subject” of experience, according to this theory, are one of these networks (cf. Mereological Nihilism). Counter-intuitively, while a mountain is in some sense much bigger than yourself, at a deeper level you are bigger than the biggest object you will find in a mountain. Taking seriously the phenomenal binding problem we have to conclude that a mountain is for the most part just made of fields of decohered qualia, and thus, unlike a given biologically-generated experience, it is not “a unitary subject of experience”. In order to grasp this point it is necessary to contemplate a very extreme generalization of Empty Individualism: not only is it that every moment of a person’s experience is a different subject of experience, but the principle applies to every single network of entanglement in the entire multiverse. Only a tiny minority of these have anything to do with minds representing worlds. And even those that participate in the creation of a unitary experience exist within an ecosystem that gives rise to an evolutionary process in which quintillions of slightly different entanglement networks compete in order to survive in the extreme environments provided by nervous systems. Your particular experience is an entanglement network that evolved in order to survive in the specific brain state that is present right now. In other words, macroscopic experiences are the result of harnessing the computational power of Quantum Darwinism by applying it to a very particular configuration of the CNS. Brain states themselves encode Constraint Satisfaction Problems with the networks of electric gradients across firing neurons in sub-millisecond scales instantiating constraints whose solutions are found with sub-femtosecond quantum darwinism.

(3) How can consciousness be causally efficacious?

Consciousness exerts its causal power by virtue of being the only thing that exists. If anything is causal at all, it must, in the final analysis, be consciousness. Whatever one’s ultimate theory of causality, if physics describes the flux of qualia, then what instantiates that causality has to be this very flux.

Even under Eternalism/block view of the universe/Post-Everettian QM you can still meaningfully reconstruct causality in terms of the empirical rules for statistical independence across certain dimensions of fundamental reality. The dimensions that have time-like patterns of statistical independence will subjectively be perceived as being the arrows of time in the multiverse (cf. Timeless Causality).

Now an important caveat with this view of the relationship between qualia and causality is that it seems as if at least a weak version of epiphenomenalism must be true. The inverted spectrum thought experiment (ironically usually used in favor of the existence of qualia) can be used to question the causal power of qualia. This brings us to the fourth point:

(4) How do we explain the countless textures of consciousness?

How and why does consciousness have its countless textures, and what determines their interrelationships? Pearce anticipates that someday we will have a Rosetta Stone for translating patterns of entanglement in quantum fields to corresponding varieties of qualia (e.g. colors, smells, sounds, etc.). Now, admittedly it seems far-fetched that the different quantum fields and their interplay will turn out to be the source of the different qualia varieties. But is there something that in principle precludes this ontological correspondence? Yes, there are tremendous philosophical challenges here, the most salient of which might be the “being/form boundary”. This is the puzzle concerning why states of being (i.e. networks of entangled qualia) would act a certain way by virtue of their phenomenal character in and of itself (assuming their phenomenal character is what gives them reality to begin with). Indeed, what could possibly attach, at a fundamental level, the behavior of a given being to its intrinsic subjective texture? A compromise between full-fledged epiphenomenalism and qualia-based causality is to postulate a universal principle concerning the preference for full-spectrum states over highly differentiated ones. Consider, for example, how negative and positive electric charge “seek to cancel each other out”. Likewise, the Quantum Chromodynamics of quarks inside protons and neutrons works under a similar but generalized principle: color charges seek to cancel/complement each other out and become “white” or “colorless”. This principle would suggest that the causal power of specific qualia values comes from the gradient ascent towards more full-spectrum-like states rather than from the specific qualia values on their own. If this were true, one may legitimately wonder whether hedonium and full-spectrum states are perhaps one and the same thing (cf. Valence structuralism).
In some way this account of the “being/form boundary” is similar to process philosophy, but unlike process philosophy, here we are also taking mereological nihilism and wavefunction monism seriously.

However far-fetched it may be to postulate intrinsic causal properties for qualia values, if the ontological unity of science is to survive, there might be no other option. As we’ve seen, simple “patterns of computation” or “information processing” cannot be the source of qualia, since nothing that isn’t a quantum coherent wavefunction actually has independent existence. Unitary minds cannot supervene on decohered quantum fields. Thus the various kinds of qualia have to be searched for in networks of quantum entanglement; within a physicalist paradigm there is nowhere else for them to be.

Alternative Theories

I am very open to the possibility that other theories of consciousness are able to address these four questions. I have yet to see any evidence of this, though. But, please, change my mind if you can! Does your theory of consciousness rise to the challenge?


* This particular set of criteria was proposed by David Pearce (cf. Qualia Computing in Tucson). I would agree with him that these are crucial questions; indeed they make up the bare minimum that such a theory must satisfy. That said, we can formulate more comprehensive sets of problems to solve. An alternative framework that takes this a little further can be found in Michael Johnson’s book Principia Qualia (Eight Problems for a New Science of Consciousness).

The Binding Problem

[Our] subjective conscious experience exhibits a unitary and integrated nature that seems fundamentally at odds with the fragmented architecture identified neurophysiologically, an issue which has come to be known as the binding problem. For the objects of perception appear to us not as an assembly of independent features, as might be suggested by a feature based representation, but as an integrated whole, with every component feature appearing in experience in the proper spatial relation to every other feature. This binding occurs across the visual modalities of color, motion, form, and stereoscopic depth, and a similar integration also occurs across the perceptual modalities of vision, hearing, and touch. The question is what kind of neurophysiological explanation could possibly offer a satisfactory account of the phenomenon of binding in perception?
One solution is to propose explicit binding connections, i.e. neurons connected across visual or sensory modalities, whose state of activation encodes the fact that the areas that they connect are currently bound in subjective experience. However this solution merely compounds the problem, for it represents two distinct entities as bound together by adding a third distinct entity. It is a declarative solution, i.e. the binding between elements is supposedly achieved by attaching a label to them that declares that those elements are now bound, instead of actually binding them in some meaningful way.
Von der Malsburg proposes that perceptual binding between cortical neurons is signalled by way of synchronous spiking, the temporal correlation hypothesis (von der Malsburg & Schneider 1986). This concept has found considerable neurophysiological support (Eckhorn et al. 1988, Engel et al. 1990, 1991a, 1991b, Gray et al. 1989, 1990, 1992, Gray & Singer 1989, Stryker 1989). However, although these findings are suggestive of some significant computational function in the brain, the temporal correlation hypothesis, as proposed, is little different from the binding label solution, the only difference being that the label is defined by a new channel of communication, i.e. by way of synchrony. In information-theoretic terms, this is no different than saying that connected neurons possess two separate channels of communication, one to transmit feature detection, and the other to transmit binding information. The fact that one of these channels uses a synchrony code instead of a rate code sheds no light on the essence of the binding problem. Furthermore, as Shadlen & Movshon (1999) observe, the temporal binding hypothesis is not a theory about how binding is computed, but only how binding is signaled, a solution that leaves the most difficult aspect of the problem unresolved.
I propose that the only meaningful solution to the binding problem must involve a real binding, as implied by the metaphorical name. A glue that is supposed to bind two objects together would be most unsatisfactory if it merely labeled the objects as bound. The significant function of glue is to ensure that a force applied to one of the bound objects will automatically act on the other one also, to ensure that the bound objects move together through the world even when one, or both of them are being acted on by forces. In the context of visual perception, this suggests that the perceptual information represented in cortical maps must be coupled to each other with bi-directional functional connections in such a way that perceptual relations detected in one map due to one visual modality will have an immediate effect on the other maps that encode other visual modalities. The one-directional axonal transmission inherent in the concept of the neuron doctrine appears inconsistent with the immediate bi-directional relation required for perceptual binding. Even the feedback pathways between cortical areas are problematic for this function due to the time delay inherent in the concept of spike train integration across the chemical synapse, which would seem to limit the reciprocal coupling between cortical areas to those within a small number of synaptic connections. The time delays across the chemical synapse would seem to preclude the kind of integration apparent in the binding of perception and consciousness across all sensory modalities, which suggests that the entire cortex is functionally coupled to act as a single integrated unit.
— Section 5 of “Harmonic Resonance Theory: An Alternative to the ‘Neuron Doctrine’ Paradigm of Neurocomputation to Address Gestalt properties of perception” by Steven Lehar

Schrödinger’s Neurons: David Pearce at the “2016 Science of Consciousness” conference in Tucson

Abstract:

 

Mankind’s most successful story of the world, natural science, leaves the existence of consciousness wholly unexplained. The phenomenal binding problem deepens the mystery. Neither classical nor quantum physics seem to allow the binding of distributively processed neuronal micro-experiences into unitary experiential objects apprehended by a unitary phenomenal self. This paper argues that if physicalism and the ontological unity of science are to be saved, then we will need to revise our notions of both 1) the intrinsic nature of the physical and 2) the quasi-classicality of neurons. In conjunction, these two hypotheses yield a novel, bizarre but experimentally testable prediction of quantum superpositions (“Schrödinger’s cat states”) of neuronal feature-processors in the CNS at sub-femtosecond timescales. An experimental protocol using in vitro neuronal networks is described to confirm or empirically falsify this conjecture via molecular matter-wave interferometry.

 

For more see: https://www.physicalism.com/

 

(cf. Qualia Computing in Tucson: The Magic Analogy)

 


(Trivia: David Chalmers is one of the attendees of the talk and asks a question at 24:03.)

David Pearce on the “Schrodinger’s Neurons Conjecture”

My friend Andrés Gómez Emilsson on Qualia Computing: LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

 

Most truly radical intellectual progress depends on “crazy” conjectures. Unfortunately, few folk who make crazy conjectures give serious thought to extracting novel, precise, experimentally falsifiable predictions to confound their critics. Even fewer then publish the almost inevitable negative experimental result when their crazy conjecture isn’t confirmed. So kudos to Andrés for doing both!!

 

What would the world look like if the superposition principle never breaks down, i.e. the unitary Schrödinger dynamics holds on all scales, and not just the microworld? The naïve – and IMO mistaken – answer is that without the “collapse of the wavefunction”, we’d see macroscopic superpositions of live-and-dead cats, experiments would never appear to have determinate outcomes, and the extremely well tested Born rule (i.e. the probability of a result is the squared absolute value of the inner product) would be violated. Or alternatively, assuming DeWitt’s misreading of Everett, if the superposition principle never breaks down, then when you observe a classical live cat, or a classical dead cat, your decohered (“split”) counterpart in a separate classical branch of the multiverse sees a dead cat or a live cat, respectively.

 

In my view, all these stories rest on a false background assumption. Talk of “observers” and “observations” relies on a naïve realist conception of perception whereby you (the “observer”) somehow hop outside of your transcendental skull to inspect the local mind-independent environment (“make an observation”). Such implicit perceptual direct realism simply assumes – rather than derives from quantum field theory – the existence of unified observers (“global” phenomenal binding) and phenomenally-bound classical cats and individually detected electrons striking a mind-independent classical screen cumulatively forming a non-classical interference pattern (“local” phenomenal binding). Perception as so conceived – as your capacity for some sort of out-of-body feat of levitation – isn’t physically possible. The role of the mind-independent environment beyond one’s transcendental skull is to select states of mind internal to your world-simulation; the environment can’t create, or imprint its signature on, your states of mind (“observations”) – any more than the environment can create or imprint its signature on your states of mind while you’re dreaming.

 

Here’s an alternative conjecture – a conjecture that holds regardless of whether you’re drug-naïve, stone-cold sober, having an out-of-body experience on ketamine, awake or dreaming, or tripping your head off on LSD. You’re experiencing “Schrödinger’s cat” states right now, in virtue of instantiating a classical world-simulation. Don’t ask what it’s like to perceive a live-and-dead Schrödinger’s cat; ask instead what it’s like to instantiate a coherent superposition of distributed feature-processing neurons. Only the superposition principle allows you to experience phenomenally-bound classical objects that one naively interprets as lying in the mind-independent world. In my view, the universal validity of the superposition principle allows you to experience a phenomenally bound classical cat within a seemingly classical world-simulation – or perform experiments with classical-looking apparatus that have definite outcomes, and confirm the Born rule. Only the vehicle of individual coherent superpositions of distributed neuronal feature-processors allows organic mind-brains to run world-simulations described by an approximation of classical Newtonian physics. In the mind-independent world – i.e. not the world of your everyday experience – the post-Everett decoherence program in QM pioneered by Zeh, Zurek et al. explains the emergence of an approximation of classical “branches” for one’s everyday world-simulations to track. Yet within the CNS, only the superposition principle allows you to run a classical world-simulation tracking such gross fitness-relevant features of your local extracranial environment. A coherent quantum mind can run phenomenally-bound simulations of a classical world, but a notional classical mind couldn’t phenomenally simulate a classical world – or phenomenally simulate any other kind of world. For a supposedly “classical” mind would just be patterns of membrane-bound neuronal mind-dust: mere pixels of experience, a micro-experiential zombie.

 

Critically, molecular matter-wave interferometry can in principle independently be used to test the truth – or falsity – of this conjecture (see: https://www.physicalism.com/#6).

 

OK, that’s the claim. Why would (almost) no scientifically informed person take the conjecture seriously?

 

In a word, decoherence.

 

On a commonsense chronology of consciousness, our experience of phenomenally bound perceptual objects “arises” via patterns of distributed neuronal firings over a timescale of milliseconds – the mystery lying in how mere synchronised firing of discrete, decohered, membrane-bound neurons / micro-experiences could generate phenomenal unity, whether local or global. So if the lifetime of coherent superpositions of distributed neuronal feature-processors in the CNS were milliseconds, too, then there would be an obvious candidate for a perfect structural match between the phenomenology of our conscious minds and neurobiology / fundamental physics – just as I’m proposing above. Yet of course this isn’t the case. The approximate theoretical lifetimes of coherent neuronal superpositions in the CNS can be calculated: femtoseconds or less. Thermally-induced decoherence is insanely powerful and hard to control. It’s ridiculous – intuitively at any rate – to suppose that such fleeting coherent superpositions could be recruited to play any functional role in the living world. An epic fail!

 

Too quick.
Let’s step back.
Many intelligent people initially found it incredible that natural selection could be powerful enough to throw up complex organisms as thermodynamically improbable as Homo sapiens. We now recognise that the sceptics were mistaken: the human mind simply isn’t designed to wrap itself around evolutionary timescales of natural selection playing out over hundreds of millions of years. In the CNS, another form of selection pressure plays out – a selection pressure over one hundred orders of magnitude (sic) more powerful than selection pressure on information-bearing self-replicators as conceived by Darwin. “Quantum Darwinism” as articulated by Zurek and his colleagues isn’t the shallow, tricksy metaphor one might naively assume; and the profound implications of such a selection mechanism must be explored for the world-simulation running inside your transcendental skull, not just for the extracranial environment. At work here is unimaginably intense selection pressure favouring comparative resistance to thermally (etc.)-induced decoherence [i.e. the rapid loss of coherence of the complex phase amplitudes of the components of a superposition] of functionally bound phenomenal states of mind in the CNS. In my view, we face a failure of imagination of the potential power of selection pressure analogous to the failure of imagination of critics of Darwin’s account of human evolution via natural selection. It’s not enough lazily to dismiss sub-femtosecond decoherence times of neuronal superpositions in the CNS as the reductio ad absurdum of quantum mind. Instead, we need to do the interferometry experiments to settle the issue definitively, not (just) philosophize.

 

Unfortunately, unlike Andrés, I haven’t been able to think of a DIY desktop experiment that could falsify or vindicate the conjecture. The molecular matter-wave experiment I discuss in “Schrödinger’s Neurons” is conceptually simple but (horrendously) difficult in practice. And the conjecture it tests is intuitively so insane that I’m sometimes skeptical the experiment will ever get done. If I sound like an advocate rather than a bemused truth-seeker, I don’t mean to be so; but if phenomenal binding isn’t quantum-theoretically or classically explicable, then dualism seems unavoidable. In that sense, David Chalmers is right.

 

How come I’m so confident that the superposition principle doesn’t break down in the CNS? After all, the superposition principle has been tested only up to the level of fullerenes, and no one yet has a proper theory of quantum gravity. Well, besides the classical impossibility of the manifest phenomenal unity of consciousness, and the cogent reasons a physicist would give you for not modifying the unitary Schrödinger dynamics, the reason is really just a philosophical prejudice on my part. Namely, the universal validity of the superposition principle of QM offers the only explanation-space I can think of for why anything exists at all: an informationless zero ontology dictated by the quantum analogue of the Library of Babel.


We shall see.

– David Pearce, commenting on the latest significant article published on this blog.

Panpsychism and Compositionality: A solution to the hard problem

By Anand Rangarajan


We begin with the assumption that all emergentist approaches are inadequate to solve the hard problem of experience. Consequently, it’s hard to escape the conclusion that consciousness is fundamental and that some form of panpsychism is true. Unfortunately, panpsychism faces the combination problem: why should proto-experiences combine to form full-fledged experiences? Since the combination problem has resisted many attempts, we argue for compositionality as the missing ingredient needed to explain mid-level experiences such as ours. Since this is controversial, we carefully present the full argument below. To begin, we assume, following Frege, that experience cannot exist without being accompanied by a subject of experience (SoE). An SoE provides the structural and spatio-temporally bounded “container” for experience and, following Strawson, is conceived as a thin subject. Thin subjects exhibit a phenomenal unity with different types of phenomenal content (sensations, thoughts, etc.) occurring during their temporal existence. Next, following Stoljar, we invoke our ignorance of the true physical as the reason for the explanatory gap between present-day physical processes (events, properties) and experience. We are therefore permitted to conceive of thin subjects as physical compositions. Compositionality has been an intensely studied area in the past twenty years. While there is no clear consensus here, we argue, following Koslicki, that a case can be made for a restricted compositionality principle and that thin subjects are physical compositions of a certain natural kind. In this view, SoEs are natural-kind objects with a yet-to-be-specified compositionality relation connecting them to the physical world. The specifics of this relation will be detailed by a new physics; at this juncture, all we can provide are guiding metaphors. We suggest that the relation binding an SoE to the physical is akin to the relation between a particle and a field.
In present-day physics, a particle is conceived as a coherent excitation of a field and is spatially and temporally bounded (with the photon being the sole exception). Under the right set of circumstances, a particle coalesces out of a field and dissipates. We suggest that an SoE can be conceived as akin to a particle coalescing out of physical fields, persisting for a brief period of time and then dissipating – in a manner similar to the phenomenology of a thin subject. Experiences are physical properties of SoEs with the constraint (specified by a similarity metric) that SoEs belonging to the same natural kind will have similar experiences. The counter-intuitive aspect of this proposal is the unexpected “complexity” exhibited by SoE particles, but we have been prepared for this by the complex behavior of elementary particles in over ninety years of experimental physics. Consequently, while it is odd at first glance to conceive of subjects of experience as particles, the spatial and temporal unity exhibited by particles as opposed to fields, and the expectation that SoEs are new kinds of particles, pave the way for cementing this notion. Panpsychism and compositionality are therefore new bedfellows aiding us in resolving the hard problem.


– Talk given at The Science of Consciousness 2016, held in Tucson, Arizona (slides)

(cf. Qualia Computing in Tucson: The Magic Analogy)

LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

[Content Warnings: Psychedelic Depersonalization, Fear of the Multiverse, Personal Identity Doubts, Discussion about Quantum Consciousness, DMT entities, Science]

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

– Emily Dickinson

Is it for real?

A sizable percentage of people who try a high dose of DMT end up convinced that the spaces they visit during the trip exist in some objective sense; they either suspect, intuit or conclude that their psychonautic experience reflects something more than simply the contents of their minds. Most scientists would argue that those experiences are just the result of exotic brain states; the worlds one travels to are bizarre (often useless) simulations made by our brain in a chaotic state. This latter explanation space forgoes alternate realities for the sake of simplicity, whereas the former envisions psychedelics as a multiverse portal technology of some sort.

Some exotic states, such as DMT breakthrough experiences, do typically create feelings of glimpsing foundational information about the depth and structure of the universe. Entity contact is frequent, and these seemingly autonomous DMT entities are often reported to have the ability to communicate with you. Achieving verifiable contact with entities from another dimension would revolutionize our conception of the universe. Nothing would be quite as revolutionary, really. But how to do so? One could test the external reality of these entities by asking them to provide information that cannot be obtained unless they themselves held an objective existence. In this spirit, some have proposed asking these entities complex mathematical questions that would be impossible for a human to solve within the time provided by the trip. This particular test is really cool, but it has a flaw: DMT experiences may themselves trigger computationally useful synesthesia of the sort that Daniel Tammet experiences. Thus even if DMT entities appeared to solve extraordinary mathematical problems, one could still argue that it was oneself who solved them, and that one was merely projecting the results onto the entities. The mathematical ability would be the result of being lucky in the kind of synesthesia DMT triggered in you.

A common overarching description of the effects of psychedelics is that they “raise the frequency of one’s consciousness.” Now, this is a description we should take seriously whether or not we believe that psychedelics are inter-dimensional portals. After all, promising models of psychedelic action involve fast-paced control interruption, where each psychedelic would have its characteristic control interrupt frequency. And within a quantum paradigm, Stuart Hameroff has argued that psychedelic compounds work by bringing up the quantum resonance frequency of the water inside our neurons’ microtubules (perhaps going from megahertz to gigahertz), which he claims increases the non-locality of our consciousness.

In the context of psychedelics as inter-dimensional portals, this increase in the main frequency of one’s consciousness may be the key that allows us to interact with other realities. Users describe a sort of tuning of one’s consciousness, as if the interface between one’s self and the universe underwent some sudden re-adjustment in an upward direction. In the same vein, psychedelicists (e.g. Rick Strassman) frequently describe the brain as a two-way radio, and then go on to claim that psychedelics expand the range of channels we can be attuned to.

One could postulate that the interface between oneself and the universe that psychonauts describe has a real existence of its own. It would provide the bridge between us as (quantum) monads and the universe around us; and the particular structure of this interface would determine the selection pressures responsible for the part of the multiverse that we interact with. By modifying the spectral properties of this interface (e.g. by drastically raising the main frequency of its vibration) with, e.g. DMT, one effectively “relocates” (cf. alien travel) to other areas of reality. Assuming this interface exists and that it works by tuning into particular realities, what sorts of questions can we ask about its properties? What experiments could we conduct to verify its existence? And what applications might it have?

The Psychedelic State of Input Superposition

Once in a while I learn about a psychedelic effect that captures my attention precisely because it points to simple experiments that could distinguish between the two rough explanation spaces discussed above (i.e. “it’s all in your head” vs. “real inter-dimensional travel”). This article will discuss a very odd phenomenon whose interpretations do indeed have different empirical predictions. We are talking about the experience of sensing what appears to be a superposition of inputs from multiple adjacent realities. We will call this effect the Psychedelic State of Input Superposition (PSIS for short).

There is no known way to induce PSIS on purpose. Unlike the reliable DMT hyper-dimensional journeys to distant dimensions, PSIS is a rare closer-to-home effect and it manifests only on high doses of LSD (and maybe other psychedelics). Rather than feeling like one is tuning into another dimension in the higher frequency spectrum, it feels as if one just accidentally altered (perhaps even broke) the interface between the self and the universe in a way that multiplies the number of realities you are interacting with. After the event, the interface seems to tune into multiple similar universes at once; one sees multiple possibilities unfold simultaneously. After a while, one somehow “collapses” into only one of these realities, and while coming down, one is thankful to have settled somewhere specific rather than remaining in that weird in-between. Let’s take a look at a couple of trip reports that feature this effect:

[Trip report of taking a high dose of LSD on an airplane]: So I had what you call “sonder”, a moment of clarity where I realized that I wasn’t the center of the universe, that everyone is just as important as me, everyone has loved ones, stories of lost love etc, they’re the main character in their own movies.


That’s when shit went quantum. All these stories begun sinking in to me. It was as if I was beginning to experience their stories simultaneously. And not just their stories, I began seeing the story of everyone I had ever met in my entire life flash before my eyes. And in this quantum experience, there was a voice that said something about Karma. The voice told me that the plane will crash and that I will be reborn again until the quota of my Karma is at -+0. So, for every ill deed I have done, I would have an ill deed committed to me. For every cheap T-shirt I purchased in my previous life, I would live the life of the poor Asian sweatshop worker sewing that T-shirt. For every hooker I fucked, I would live the life of a fucked hooker.


And it was as if thousands of versions of me was experiencing this moment. It is hard to explain, but in every situation where something could happen, both things happened and I experienced both timelines simultaneously. As I opened my eyes, I noticed how smoke was coming out of the top cabins in the plane. Luggage was falling out. I experienced the airplane crashing a thousand times, and I died and accepted death a thousand times, apologizing to the Karma God for my sins. There was a flash of the brightest white light imagineable and the thousand realities in which I died began fading off. Remaining was only one reality in which the crash didn’t happen. Where I was still sitting in the plane. I could still see the smoke coming out of the plane and as a air stewardess came walking by I asked her if everything was alright. She said “Yes, is everything alright with YOU?”.


— Reddit user I_DID_LSD_ON_A_PLANE, in r/BitcoinMarkets (why there? who knows).

Further down on the same thread, written by someone else:

[A couple hours after taking two strong hits of LSD]: Fast-forward to when I’m peaking hours later and I find myself removed from the timeline I’m in and am watching alternate timelines branch off every time someone does something specific. I see all of these parallel universes being created in real time, people’s actions or interactions marking a split where both realities exist. Dozens of timelines, at least, all happening at once. It was fucking wild to witness.


Then I realize that I don’t remember which timeline I originally came out of and I start to worry a bit. I start focusing, trying to remember where I stepped out of my particular universe, but I couldn’t figure it out. So, with the knowledge that I was probably wrong, I just picked one to go back into and stuck with it. It’s not like I would know what changed anyway, and I wasn’t going to just hang out here in the whatever-this-place-is outside of all of them.


Today I still sometimes feel like I left a life behind and jumped into a new timeline. I like it, I feel like I left a lot of baggage behind and there are a lot of regrets and insecurities I had before that trip that I don’t have anymore. It was in a different life, a different reality, so in this case the answer I found was that it’s okay to start over when you’re not happy with where you are in life.


— GatorAutomator

Let us summarize: Person X takes a lot of LSD. At some point during the trip (usually after feeling that “this trip is way too intense for me now”) X starts experiencing sensory input from what appear to be different branches of the multiverse. For example, imagine that person X can see a friend Y sitting on a couch in the corner. Suppose that Y is indecisive, and that as a result he makes different choices in different branches of the multiverse. If Y is deciding whether to stand up or not, X will suddenly see a shadowy figure of Y standing up while another shadowy figure of Y remains sitting. Let’s call them Y-sitting and Y-standing. If Y-standing then turns indecisive about whether to drink some water or go to the bathroom, X may see one shadowy figure of Y-standing getting water and a shadowy figure of Y-standing walking towards the bathroom, all the while Y-sitting is still on the couch. And so it goes. The number of times per second that Y splits and the duration of the perceived superposition of these splits may be a function of X’s state of consciousness, the substance and dose consumed, and the degree of indecision present in Y’s mind.

The two quotes provided are examples of this effect, and one can find a number of additional reports online with stark similarities. There are two issues at hand here. First, what is going on? And second, can we test it? We will discuss three hypotheses to explain what goes on during PSIS, propose an experiment to test the third one (the Quantum Hypothesis), and provide the results of such an experiment.

Hard-nosed scientists may want to skip to the “Experiment” section, since the following contains a fair amount of speculation (you have been warned).

Three Hypotheses for PSIS: Cognitive, Spiritual, Quantum

In order to arrive at an accurate model of the world, one needs to take into account both the prior probability of each hypothesis and the likelihood it assigns to the available evidence. Even if one of your priors is extremely strong (e.g. a strong belief in materialism), it is still rational to update your probability estimates of alternative hypotheses when new relevant evidence arrives. The difficulty often comes from finding experiments for which the various hypotheses assign very different likelihoods to one’s observations. As we will see, the Quantum Hypothesis has this characteristic: it is the only one that actually predicts a positive result for the experiment.
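As a toy illustration of this kind of update, here is the Bayesian arithmetic in code. The priors and likelihoods below are made-up numbers of mine, chosen only to show the mechanics; they are not estimates defended anywhere in this article:

```python
# Toy Bayesian update over the three PSIS hypotheses.
# All numbers are illustrative assumptions, not measured values.

priors = {"cognitive": 0.90, "spiritual": 0.09, "quantum": 0.01}

# Assumed likelihood of observing a *positive* experimental result
# under each hypothesis: only the quantum one really predicts it.
likelihoods = {"cognitive": 0.001, "spiritual": 0.001, "quantum": 0.9}

# Bayes' rule: posterior(h) = prior(h) * likelihood(h) / evidence
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

With these (assumed) numbers, a single positive result would move nearly all of the posterior mass onto the Quantum Hypothesis despite its tiny prior, which is exactly why an experiment with such asymmetric likelihoods is worth running.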

The Cognitive Hypothesis

The first (and perhaps least surreal) hypothesis is that PSIS is “only in one’s mind”. When person X sees person Y both standing up and staying put, what may be happening is that X is receiving photons only from Y-standing and that Y-sitting is just a hallucination that X’s inner simulation of her environment failed to erase.

Psychedelics intensify one’s experience, and this is thought to be the result of control interruption. This means that inhibition of mental content by cortical feedback is attenuated. In the psychedelic state, sensory impressions, automatic reactions, feelings, thoughts and all other mental contents are more intense and longer-lived. This includes the predictions that you make about how your environment will evolve. Not only is one’s sensory input perceived as more intense, one’s imagined hypotheticals are also perceived more intensely.

Under normal circumstances, cortical inhibition makes our failed predictions quickly disappear. Psychedelic states of consciousness may be poor at inhibiting these predictions. In this account, X may be experiencing her brain’s past predictions of what Y could have done overlaid on top of the current input that she is receiving from her physical environment. In a sense, she may be experiencing all of the possible “next steps” that she simply intuited. While these simulations typically remain below the threshold of awareness (or just above it), on a psychedelic state they may reinforce themselves in unpredictable ways. X’s mind never traveled anywhere and there is nothing really weird going on. X is just experiencing the aftermath of a specific failure of information processing concerning the inhibition of past predictions.

Alternatively, very intense emotions such as those experienced on intense ego-killing psychedelic experiences may distort one’s perception so much that one begins to suspect that one is perhaps dead or in another dimension. We can posit that the belief that one is not properly connected to one’s brain (or that one is dying) can trigger even stronger emotions and unleash a cascade of further distortions. This positive feedback loop may create episodes of intense confusion and overlapping pieces of information, which later might be interpreted as “seeing splitting universes”.

The Spiritual Hypothesis

Many spiritual traditions postulate the existence of alternate dimensions, additional layers of reality, and hidden spirit pathways that connect all of reality. These traditions often provide rough maps of these realities and may claim that some people are able to travel to such far-out regions with mental training and consciousness technologies. For illustration, let’s consider Buddhist cosmology, which describes 31 planes of existence. Interestingly, one of the core ideas of this cosmology is that the major characteristic distinguishing the planes of existence is the states of consciousness typical of their inhabitants. These states of consciousness are correlated with moral conditions such as the ethical quality of one’s past deeds (karma), one’s relationship with desire (e.g. whether it is compulsive, sustainable or indifferent) and one’s existential beliefs. In turn, a feature of this cosmology is that it allows inter-dimensional travel by changing one’s state of consciousness. The part of the universe one interacts with is a function of one’s karma, affinities and beliefs. So by changing these variables with meditation (or psychedelic medicine) one can also change which world one exists in.

An example of a very interesting location worth trying to travel to is the mythical city of Shambhala, the location of the Kalachakra Tantra. This city has allegedly turned into a pure land thanks to the fact that its king converted to Buddhism after meeting the Buddha. Pure lands are abodes populated by enlightened and quasi-enlightened beings whose purpose is to provide an optimal teaching environment for Buddhism. One can go to Shambhala by either reincarnating there (with good karma and the help of some pointers and directions at the time of death) or by traveling there directly during meditation. In order to do the latter, one needs to kindle one’s subtle energies so that they converge on one’s heart, while one is embracing the Bodhisattva ethic (focusing on reducing others’ suffering as a moral imperative). Shambhala may not be in a physical location accessible to humans. Rather, Buddhist accounts would seem to depict it as a collective reality built by people which manifests on another plane of existence (specifically somewhere between the 23rd and 27th layer). In order to create a place like that one needs to bring together many individuals in a state of consciousness that exhibits bliss, enlightenment and benevolence. A pure land has no reality of its own; its existence is the result of the states of consciousness of its inhabitants. Thus, the very reason why Shambhala can even exist as a place somewhere outside of us is because it is already a potential place that exists within us.

Similar accounts of a wider cosmological reality can be found elsewhere (such as Hinduism, Zoroastrianism, Theosophy, etc.). These accounts may be consistent with the sort of experiences having to do with astral travel and entity contact that people have while on DMT and other psychedelics in high doses. However, it seems a lot harder to explain PSIS with an ontology of this sort. While reality is indeed portrayed as immensely vaster than what science has shown so far, we do not really encounter claims of parallel realities that are identical to ours except that your friend decided to go to the bathroom rather than drink some water just now. In other words, while many spiritual ontologies are capable of accommodating DMT hyper-dimensional travel, I am not aware of any spiritual worldview that also claims that whenever two things can happen, they both do in alternate realities (or, more specifically, that this leads to reality splitting).

The only spiritual-sounding interpretation of PSIS I can think of is the idea that these experiences are the result of high-level entities such as guardians, angels or trickster djinns who used your LSD state to teach you a lesson in an unconventional way. The first quote (the one written by Reddit user I_DID_LSD_ON_A_PLANE) seems to point in this direction, where the so-called Karma God is apparently inducing a PSIS experience and using it to illustrate the idea that we are all one (i.e. Open Individualism). Furthermore, the experience viscerally portrays the way that this knowledge should impact our feelings of self-importance (by creating a profound feeling of sonder). This way, the tripper may develop a lasting need to work towards peace, wisdom and enlightenment for the benefit of all sentient beings.

Life as a learning experience is a common trope among spiritual worldviews. It is likely that the spiritual interpretations that emerge in a state of psychedelic depersonalization and derealization will depend on one’s pre-existing ideas of what is possible. The atonement of one’s sins, becoming aware of one’s karma, feeling our past lives, realizing emptiness, hearing a dire mystical warning, etc. are all ideas that already exist in human culture. In an attempt to make sense (any sense) of the kind of qualia experienced on high doses of psychedelics, our minds may be forced to instantiate grandiose delusions drawn from one’s reservoir of far-out ideas.

On a super intense psychedelic experience in which one’s self-models fail dramatically and one experiences fear of ego dissolution, interpreting what is happening as the result of the Karma God judging you and then giving you another chance at life can viscerally seem to make a lot of sense at the time.

The Quantum Hypothesis

For the sake of transparency I must say that we currently do not have a derivation of PSIS from first principles. In other words, we have not yet found a way to use the postulates of quantum mechanics to account for PSIS (that is, assuming that the cognitive and spiritual hypotheses do not hold). That said, there are indeed some things to be said here: while a full theory is missing, we can at least talk about what a quantum mechanical account of PSIS would have to look like. That is, we can sketch some of the features the theory would need in order to predict that people on LSD can see superpositions of macroscopic branches of the multiverse.

Why would being on acid allow you to receive input from macroscopic environments that have already decohered? How could taking LSD possibly prevent the so-called collapse of the wavefunction? You might think: “well, why even think about it? It’s simply impossible, because the collapse of the wavefunction is an axiom of quantum mechanics and we know it is true because some of the predictions of quantum mechanics (such as those of QED) agree with experimental data to twelve decimal places.” Before jumping to this conclusion, though, let us remember that there are several formulations of quantum mechanics. Both the Born rule (which determines the probability of seeing different outcomes from a given quantum measurement) and the collapse of the wavefunction (i.e. the claim that any quantum state other than the one that was measured disappears) are indeed axiomatic for some formulations. But other formulations actually derive these features and don’t consider them fundamental. Here is Sean Carroll explaining the usual postulates that are used to teach quantum mechanics to undergraduate audiences:

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.
  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

In contrast, here is what you need to specify for the Everett (Multiple Worlds) formulation of quantum mechanics:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.

And that’s it. As you can see this formulation does not employ any collapse of the wavefunction, and neither does it consider the Born rule as a fundamental law. Instead, the wavefunction is thought to merely seem to collapse upon measurement (which is achieved by nearly diagonalizing its components along the basis of the measurement; strictly speaking, neighboring branches never truly stop interacting, but the relevance of their interaction approaches zero very quickly). Here the Born rule is derived from first principles rather than conceived as an axiom. How exactly one can derive the Born rule is a matter of controversy, however. Currently, two very promising theoretical approaches to do so are Quantum Darwinism and the so-called Epistemic Separability Principle (ESP for short, a technical physics term not to be confused with Extra Sensory Perception). Although these approaches to deriving the Born rule are considered serious contenders for a final explanation (and they are not mutually exclusive), they have been criticized for being somewhat circular. The physics community is far from having a consensus on whether these approaches truly succeed.
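To make concrete what the Born rule actually computes (whether taken as an axiom or derived), here is a toy numeric illustration in plain Python. The amplitudes below are arbitrary values chosen only for illustration; the point is simply that outcome probabilities are the squared magnitudes of the normalized complex amplitudes:

```python
# Toy illustration of the Born rule: outcome probabilities are the
# squared magnitudes of the normalized complex amplitudes.
# The amplitudes below are arbitrary, chosen only for illustration.
import math

amplitudes = [complex(1, 1), complex(0, 2), complex(1, 0)]

# Normalize the state vector so that the squared magnitudes sum to 1.
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
state = [a / norm for a in amplitudes]

# Born rule: P(outcome i) = |amplitude_i|^2
probs = [abs(a) ** 2 for a in state]
print([round(p, 4) for p in probs])
```

Nothing here derives the rule, of course; it only shows the quantity whose fundamental status is in dispute between the textbook and Everettian formulations.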

Is there any alternative to either axiomatizing or deriving the apparent collapse and the Born rule? Yes, there is: we can think of them as regularities contingent upon certain conditions that are always (or almost always) met in our sphere of experience, but that are not a universal fact about quantum mechanics. Macroscopic decoherence and Born rule probability assignments work very well in our everyday lives, but they may not hold universally. In particular (and this is a natural idea under any view that links consciousness and quantum mechanics), one could postulate that one’s state of consciousness influences the mind-body interaction in such a way that information from one’s quantum environment seeps into one’s mind in a different way.

Don’t get me wrong; I am aware that the Born rule has been experimentally verified with extreme precision. I only ask that you bear in mind that many scientific breakthroughs share a simple form: they question the constancy of certain physical properties. For example, Einstein’s theory of special relativity worked out the implications of the fact that the speed of light is observer-independent. In turn, this makes the passage of time in external systems observer-dependent. Scientists had a hard time believing Einstein when he arrived at the conclusion that accelerating our frame of reference to extremely high velocities could dilate time. What was thought to be a constant (the passage of time throughout the universe) turned out to be an artifact of the fact that we rarely travel fast enough to notice any deviation from Newton’s laws of motion. In other words, our previous understanding was flawed because it assumed that certain observations did not break down in extreme conditions. Likewise, maybe we have been accidentally ignoring a whole set of physically relevant extreme conditions: altered states of consciousness. The apparent wavefunction collapse and the Born rule may be perfectly constant in our everyday frame of reference, and yet variable across the state-space of possible conscious experiences. If this were the case, we’d finally understand why it seems so hard to derive the Born rule from first principles: it’s impossible.

Succinctly, the Quantum Hypothesis is that psychedelic experiences modify the way one’s mind interacts with its quantum environment in such a way that the world does not appear to decohere any longer from one’s point of view. Our ignorance about the non-universality of the apparent collapse of the wavefunction is just a side effect of the fact that physicists do not usually perform experiments during intense life-changing entheogenic mind journeys. But for science, today we will.

Deriving PSIS with Quantum Mechanics

Here we present a rough (incomplete) sketch of what a possible derivation of PSIS from quantum mechanics might look like. To do so we need three background assumptions: First, conscious experiences must be macroscopic quantum coherent objects (i.e. ontologically unitary subsets of the universal wavefunction, akin to super-fluid helium or Bose–Einstein condensates, except at room temperature). Second, people’s decision-making process must somehow amplify low-level quantum randomness into macroscopic history bifurcations. And third, the properties of our quantum environment* are in part the result of the quantum state of our mind, which psychedelics can help modify. This third assumption brings into play the idea that if our mind is more coherent (e.g. is in a super-symmetrical state) it will select for wavefunctions in its environment that themselves are more coherent. In turn, the apparent lifespan of superpositions may be elongated long enough so that the quantum environment of one’s mind receives records from both Y-sitting and Y-standing as they are overlapping. Now, how credible are these three assumptions?

That events of experience are macroscopic quantum coherent objects is an explanation space usually perceived as pseudo-scientific, though a sizable number of extremely bright scientists and philosophers do entertain the idea very seriously. Contrary to popular belief, there are legitimate reasons to connect quantum computing and consciousness. The reasons for making this connection include the possibility of explaining the causal efficacy of consciousness, finding an answer to the palette problem with quantum fields and solving the phenomenal binding problem with quantum coherence and panpsychism.

The second assumption claims that people around you work as quantum Random Number Generators. That human decision-making amplifies low-level quantum randomness is thought to be likely by at least some scientists, though the time-scale on which this happens is still up for debate. The brain’s decision-making is chaotic, and over the span of seconds it may amplify quantum fluctuations into macroscopic differences. Thus, people around you making decisions may result in splitting universes (e.g. “[I] am watching alternate timelines branch off every time someone does something specific.” – GatorAutomator’s quote above). Presumably, this assumption would also imply that during PSIS not only people but also physics experiments would lead to apparent macroscopic superposition.

With regards to the third assumption: widespread microscopic decoherence is not, apparently, a necessary consequence of the postulates of quantum mechanics. Rather, it is a very specific outcome of (a) our universe’s Hamiltonian and (b) the starting conditions of our universe, i.e. Pre-Inflation/Eternal Inflation/Big Bang (Ney & Albert, 2013). In principle, psychedelics may influence the part of the Hamiltonian that matters for the evolution of our mind’s wavefunction and its local interactions. In turn, this may modify the decoherence patterns of our consciousness with its local environment and, perhaps, ultimately the surrounding macroscopic world. Of course we do not know if this is possible, and I would have to agree that it is extremely far-fetched.

The overall picture that would emerge from these three assumptions would take the following form: both the mental content and raw phenomenal character of our states of consciousness are the result of the quantum micro-structure of our brains. By modifying this micro-structure, one is not only altering the selection pressures that give rise to fully formed experiences (i.e. quantum darwinism applied to the compositionality of quantum fields) but also altering the selection pressures that determine which parts of the universal wave-function we are entangled with (i.e. quantum darwinism applied to the interactions between coherent objects). Thus psychedelics may not only influence how our experience is shaped within, but also how it interacts with the quantum environment that surrounds it. Some mild psychedelic states (e.g. MDMA) may influence mostly the inner degrees of freedom of one’s mind, while other more intense states (e.g. DMT) may be the result of severe changes to the entanglement selection pressures and thus result in the apparent disconnection between one’s mind and one’s local environment. Here PSIS would be the result of decreasing the rate at which our mind decoheres (possibly by increasing the degree to which our mind is in a state of quantum confinement). In turn, by boosting one’s own inner degree of quantum superposition one may also broaden the degree of superposition acceptable at the interface with one’s quantum environment. One could now readily take in packets of information that have a wider degree of superposition. In the right circumstances, this may result in one’s mind experiencing information seemingly coming from alternate branches of the multiverse. In other words, the trick to PSIS both in the Quantum and the Spiritual Hypothesis is the same (though for different reasons): travel to other dimensions by being the change that you wish to see in the world. 
You need to increase your own degree of quantum coherence in order to become capable of interacting with a more coherent quantum environment.

If this were the case it would call for a conceptual revolution. We would stop thinking of “our universe” as the “place we inhabit” and instead think of it as “the quantum environment our consciousness can interface with”. Perhaps life as we know it exists in a quantum niche that can give rise to embodied biological beings, but the universal wavefunction also contains alien state-spaces in which the patterns of entanglement work in different ways and lead to different evolutionary histories. Our minds evolved to interact with the particular environment that leads to the sustenance of our everyday macroscopic world. Our consciousness is shaped to fit in here, and it has a symbiotic relationship with the body that hosts it.

Now, how do we go about testing the quantum hypothesis? Following previous work on psychophysics for psychedelic research, the main idea will be to show that people on psychedelics can achieve something that sober individuals can’t. In this case, the task is designed to be impossible if one is only receiving input from one world (as predicted by the cognitive hypothesis). Only if PSIS is the result of simultaneously experiencing input from the several nearby branches that correspond to different outcomes of quantum events can the task be successfully performed.

The Experiment

Setup

The task in question involves three little white squares on a black background. At each time step (~16 milliseconds) the squares move one pixel down and some number of pixels horizontally. While the downward movement is constant, the horizontal displacement is chosen randomly. The horizontal movement of one of the squares is randomized on the fly with a Quantum Random Number Generator (qRNG**). We call this the quantum square. The horizontal movement of the other two squares follows a path that was determined at the beginning of each trial by a Classical Random Number Generator (cRNG***). We will refer to these two squares as the classical squares. The task for the participant is to identify which of the squares is the “odd one out” (i.e. the quantum square). Here is an infographic that explains the experimental setup:

[Infographic explaining the experimental setup]

In order to make the horizontal movement seem very chaotic and, at times, even dramatic, it is randomized by choosing a horizontal change of either -4, -1, 1 or 4 pixels. By using a jump of 4 pixels (rather than e.g. 2) the squares sometimes seem to advance very quickly in one direction. The rationale for doing this was to make the predicted effect (assuming the Quantum Hypothesis) more noticeable: we want low-probability paths that are extremely different from the typical path, which in turn makes it easy to distinguish between single paths and the superposition of many possible paths at once.
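To make the mechanics concrete, here is a minimal sketch (in JavaScript, with hypothetical names; the actual repository code may differ) of how a random byte can be mapped to one of the four horizontal jumps and applied at each ~16 ms time step:

```javascript
// Map one random byte (0-255) to a horizontal jump of -4, -1, 1 or 4 pixels.
// Since 256 is divisible by 4, the mapping is uniform over the four jumps.
// Hypothetical helper names; the repository's actual code may differ.
const JUMPS = [-4, -1, 1, 4];

function byteToJump(b) {
  return JUMPS[b % 4];
}

// Advance a square by one ~16 ms time step: constant downward drift,
// random horizontal displacement.
function step(square, randomByte) {
  return {
    x: square.x + byteToJump(randomByte),
    y: square.y + 1, // one pixel down per frame
  };
}
```

The quantum square would feed `step` with bytes from the qRNG, while the classical squares consume a pre-generated cRNG path.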

Critically, the starting position of the quantum square (whether left, center or right) for each of the trials during an experiment is obtained with a cRNG at the very beginning of the experiment. Why not use a qRNG? If we randomized the position of the quantum square using a qRNG we might be sabotaging ourselves: participants might see a superposition of classical and quantum squares on each of the three positions and thus not see any square behave differently than the other two. Thus, for the experiment to work (under the assumption that PSIS is explained by the Quantum Hypothesis) the only aspect that should be randomized using a qRNG is the horizontal displacement of the quantum square.

Since the numbers coming from the qRNG need to be requested over the internet, it is also crucial to obtain fresh quantum random numbers as frequently as possible without disrupting the flow of the experiment. Thus, a request for new quantum random numbers is made as often as possible (usually every 1 to 2 seconds). Since internet connections, server response times and network reliability vary from moment to moment, it is inevitable that once in a while we will have to stop the downward movement of all three squares while we wait for the next batch of random numbers. In principle this should have no negative effect at all on the ability to choose the odd one out. It merely interrupts the flow of the experiment and should not invalidate the results. We kept track of the number of trials that were interrupted for this reason (recorded in the “Network” column of the table of results below).
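The buffering-and-pausing behavior described above can be sketched as follows (a simplified illustration with hypothetical names, not the repository's actual code): the animation loop draws from a local buffer of quantum random numbers, triggers a refill request when the buffer runs low, and freezes all three squares whenever the buffer runs dry.

```javascript
// Simplified sketch of the number buffer behind the quantum square.
// Hypothetical names; the repository's actual implementation may differ.
class NumberBuffer {
  constructor(lowWater = 32) {
    this.values = [];
    this.lowWater = lowWater;
  }
  // Called when a batch of quantum random numbers arrives from the server.
  addBatch(batch) {
    this.values.push(...batch);
  }
  // True when a new request for numbers should be issued.
  get needsRefill() {
    return this.values.length < this.lowWater;
  }
  // Returns the next number, or null: the signal to pause all three
  // squares until the next batch arrives.
  next() {
    return this.values.length > 0 ? this.values.shift() : null;
  }
}
```

In the animation loop, a `null` from `next()` halts the downward movement of all three squares, which is exactly the kind of interruption counted in the “Network” column of the results table.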

Participants were instructed to:

  1. Clone the GitHub repository for the experiment and open it in a Chrome browser.
  2. Make sure that they have a reliable high-speed internet connection.
  3. Record the pertinent facts (time of experiment, substance, dose and time of intake).
  4. Use a sensorially-minimized room for the experiment (such as a closed bathroom), turn off the lights, ask anyone who is in the house to be quiet and mind their own business, and wear earplugs.
  5. Meditate for a couple of minutes before conducting the experiment and try to induce PSIS voluntarily.
  6. Choose the odd-one-out (i.e. the quantum square) on each trial, for a total of at least 30 trials.

Under this setup, what should the participants expect to see if the Quantum Hypothesis is correct?

Predicted Psychedelic Perception

The pictures above illustrate what the experiment looks like to three different observers. On the left we see what a sober participant would see at a given time during a trial. The squares are moving downwards, but since we are taking a snapshot in time they seem to be static. The image in the center shows what we would expect from a generic psychedelic experience. In brief, the typical control interrupt action of psychedelics (i.e. tracers) effectively allows you to see more clearly the path that the squares have traced over time, but it would not allow you to notice any difference between the classical and quantum squares. The image on the right shows what we would expect to see if the Quantum Hypothesis is correct and PSIS is actually about perceiving nearby branches of the Everett multiverse. Notice how the center square is visibly different from the other two: it consists of the superposition of the many alternative paths the square took in slightly different branches.

Implications of a Positive Result: Quantum Mind, Everett Rescue Missions and Psychedelic Cryptography

It is worth noting that if one could reliably distinguish between the quantum and the classical squares, this would have far-reaching implications. It would confirm that our minds are macroscopic quantum coherent objects and that psychedelics influence their pattern of interactions with their surrounding quantum environment. It would also provide strong evidence in favor of the Everett interpretation of quantum mechanics (in which all possibilities are realized). Moreover, we would not only have a new perspective on the fundamental nature of the universe and the mind; the discovery would also suggest some concrete applications. Looking far ahead, a positive outcome is that this knowledge would encourage research on possible ways to achieve inter-dimensional travel, and in turn instantiate pan-Everettian rescue missions to reduce suffering elsewhere in the multiverse. The despair of confirming that the quantum multiverse is real might be evened out by the hope of finally being able to help sentient beings trapped in Darwinian environments in other branches of the universal wavefunction. Looking much closer to home, a positive result would lead to a breakthrough in psychedelic cryptography (PsyCrypto for short), where spies high on LSD would obtain the ability to read information secretly encoded in public light displays. Moreover, this particular kind of PsyCrypto would be impervious to discovery after the fact. Even given an arbitrary amount of time and resources to analyze a video recording of the event, it would not be possible to determine which of the squares was being guided by quantum randomness. Unlike other PsyCrypto techniques, this one cannot be decoded by applying psychedelic replication software to video recordings of the transmission.

Results

Three people participated in the experiments: S (self), A, and B. [A and B are anonymous volunteers; for more information read the legal disclaimer at the end of this article]. Participant S (me) tried the experiment both sober and after drinking 2 beers. Participant A tried the experiment sober, on LSD, on 2C-B, and on a combination of the two. Participant B tried the experiment both sober and on DMT. The total number of trials recorded for each condition is: 90 for the sober state, 275 for 2C-B, 60 for DMT, 120 for LSD, and 130 for the LSD/2C-B combo. The overall summary of the results: chance-level performance for all conditions. You can find the breakdown of results for all experiments in the table shown below, and you can download the raw csv file from the GitHub repository.

[Table of results]
Columns from left to right: Date, State (of consciousness), Dose(s), T (time), #Trials (number of trials), Correct (number of trials in which the participant made the correct choice), Percent correct (100*Correct/Trials), Participants (S=Self, A/B=anonymous volunteers), Requests / Second (server requests per second), Network (this tracks the number of times that a trial was temporarily paused while the browser was waiting for the next batch of quantum random numbers), Notes (by default the squares left a dim trail behind them and this was removed in two trials; by default the squares were 10×10 pixels in size, but a smaller size was used in some trials).

I thought about visualizing the results in a cool graph at first, but after I received them I realized that it would be pointless. Not a single experiment reached a statistically significant deviation from chance level; who is interested in seeing a bunch of bars representing chance-level outcomes? Null results are always boring to visualize.****

In addition to the overall performance in the task, I also wanted a qualitative assessment from the participants: did they notice any difference between the three squares? Was there any feeling that one of them was behaving differently than the other two? When I asked, they responded: “I could never see any difference between the squares, so it felt like I was making random choices” (from A) and “DMT made the screen look like a hyper-dimensional tunnel and I felt like strange entities were watching over me as I was doing the experiment, and even though the color of the squares would fluctuate randomly, I never noticed a single square behaving differently than the other two. All three seemed unique. I did feel that the squares were being controlled by some entity, as if with an agency of their own, but I figured that was made up by my mind.” (from B). When I asked them whether they noticed anything similar to the image labeled “Psychedelic view as predicted by the Quantum Hypothesis” (as shown above), they both said “no”.

Discussion

It is noteworthy that neither participant reported an experience of PSIS during the experiments. Even so, PSIS may turn out to be a continuum rather than a discrete either-or phenomenon; if so, we might still expect to see some deviations from chance even without an explicit and noticeable input superposition. This may be analogous to blindsight, in which people report not being able to see anything and yet perform better than chance in visual recognition tasks. That said, the effect sizes of blindsight and other psychological effects in which information is processed unbeknownst to the participant tend to be very small. Thus, in order to confirm that quantum PSIS is happening below the threshold of awareness, we may require a much larger number of samples (though still far fewer than what we would need if we were aiming to use the experiment to conduct psi research, with or without psychedelics, again due to the extremely small effect sizes).

Why did the experiment fail? The first possibility is that the Quantum Hypothesis is simply wrong (perhaps because it requires false assumptions to work). Second, perhaps we were simply unlucky that PSIS was not triggered during the experiments; the set, setting, and dosages used may have failed to produce the desired effect (even if the state does indeed exist out there). And third, the experiment itself may be flawed: the delay introduced by the server requests to the qRNG may be too large to produce the effect. In the current implementation (taking into account network delays), the average delay between the moment the quantum measurement was conducted and the moment it appeared on the computer screen as horizontal movement was 0.9 seconds (usually in the range of 0.4 to 1.4 seconds, given an average of half a second of lag due to number buffering and 400 milliseconds of network time). This problem would be easily sidestepped by using an on-site qRNG, i.e. hardware directly connected to the computer (as is common in psi research). To minimize the delay even further, the outcomes of the quantum measurements could be delivered directly to the brain via neural implants.

Conclusion

If psychedelic experiences do make you interact with other realities, I would like to know about it with a high degree of certainty. The present study was admittedly a very long shot, but in my judgment it was worth it. As Bayesians, we reasoned that since the Quantum Hypothesis can lead to a positive result for the experiment while the Cognitive Hypothesis can’t, a positive result should make us update our credence in the Quantum Hypothesis a great deal, and a negative result should make us update in the opposite direction. That said, the probability should still not go to zero, since the negative result could be accounted for by the participants failing to experience PSIS, and/or by the delay between the quantum measurement and the moment it influences the movement of the square on the screen being too large. Future studies should try to minimize these two possible sources of failure: first, by researching methods to reliably induce PSIS; and second, by minimizing the delay between branching and sensory input.

In the meantime, we can at least tentatively conclude that something along the lines of the Cognitive Hypothesis is the most likely case. In this light, PSIS turns out to be the result of a failure to inhibit predictions. Despite losing their status as suspected inter-dimensional portal technology, psychedelics still remain a crucial tool for qualia research. They can help us map out the state-space of possible experiences, allow us to identify the computational properties of consciousness, and maybe even allow us to reverse engineer the fundamental nature of valence.


[Legal Disclaimer]: Both participants A and B contacted me some time ago, soon after the Qualia Computing article How to Secretly Communicate with People on LSD made it to the front page of Hacker News and was linked by SlateStarCodex. They are both experienced users of psychedelics who take them about once a month. They expressed their interest in performing the psychophysics experiments I designed, and in doing so while under the influence of psychedelic drugs. I do not know these individuals personally (nor do I know their real names, locations or even their genders). I have never encouraged these individuals to take psychedelic substances and I never gave them any compensation for their participation in the experiment. They told me that they take psychedelics regularly no matter what, and that my experiments would not be the primary reason for taking them. I never asked them to take any particular substance, either. They just said “I will take substance X on day Y, can I have some experiment for that?” I have no way of knowing (1) if the substances they claim to take are actually what they think they are, (2) whether the dosages are accurately measured, or (3) whether the data they provided is accurate and hasn’t been manipulated. That said, they did explain that they have tested their materials with chemical reagents, and are experienced enough to tell the difference between similar substances. Since there is no way to verify these claims without compromising their anonymity, please take the data with a grain of salt.

* In this case, the immediate environment would actually refer to the quantum degrees of freedom surrounding our consciousness within our brain, not the macroscopic exterior vicinity such as the chair we are sitting on or the friends we are hanging out with. In this picture, our interaction with that vicinity is actually mediated by many layers of indirection.

** The experiment used the Australian National University Quantum Random Numbers Server. By calling their API every 1 to 2 seconds we obtain truly random numbers that feed the x-displacement of the quantum square. This is an inexpensive and readily-available way to magnify decoherence events into macroscopic splitting histories in the comfort of your own home.
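For reference, the server returns JSON with `success` and `data` fields (as documented by the ANU API at the time); a minimal response handler might look like this, with a hypothetical function name:

```javascript
// Parse a response from the ANU Quantum Random Numbers Server.
// The API (as documented at the time) returns JSON of the form
// {"type":"uint8","length":4,"data":[12,250,3,77],"success":true}.
// Function name is hypothetical.
function parseQrngResponse(text) {
  const obj = JSON.parse(text);
  if (!obj.success) {
    throw new Error("qRNG request failed");
  }
  return obj.data; // array of random integers
}
```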

*** In this case, JavaScript’s Math.random() function. Unfortunately the RNG algorithm varies from browser to browser. It may be worthwhile to go for a browser-independent implementation in the future to guarantee a uniform high-quality source of classical randomness.
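As an illustration of what a browser-independent cRNG could look like, here is mulberry32, a tiny seedable generator that behaves identically in every JavaScript engine (the experiment as run used Math.random(), not this):

```javascript
// mulberry32: a small seedable 32-bit PRNG. Given the same seed it
// produces the same sequence in every JavaScript engine, unlike
// Math.random(). Illustrative only; not used in the actual experiment.
function mulberry32(seed) {
  let a = seed | 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // value in [0, 1)
  };
}
```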

**** As calculated with a one-tailed binomial test with null probability equal to 1/3. The threshold of statistical significance at the p < 0.05 level is found at 15/30, and for p < 0.001 we need at least 19/30 correct responses. The best score that any participant managed to obtain was 14/30.
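These thresholds can be double-checked with an exact computation of the binomial tail (a quick verification sketch):

```javascript
// Exact one-tailed binomial test: P(X >= k) for X ~ Binomial(n, p).
// Used here to verify the significance thresholds for n = 30, p = 1/3.
function binomTail(n, k, p) {
  let tail = 0;
  for (let i = k; i <= n; i++) {
    // log C(n, i) computed incrementally to avoid factorial overflow
    let logTerm = 0;
    for (let j = 1; j <= i; j++) {
      logTerm += Math.log(n - i + j) - Math.log(j);
    }
    tail += Math.exp(logTerm + i * Math.log(p) + (n - i) * Math.log(1 - p));
  }
  return tail;
}
```

For n = 30 and p = 1/3 this gives P(X ≥ 15) ≈ 0.043 and P(X ≥ 19) ≈ 0.0007, matching the 15/30 and 19/30 thresholds quoted above.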

Just the fate of our forward light-cone

Implicit in the picture is that the Hedonium Ball is on the verge of becoming critical (turning into super-critical hedonium at around 17 kg, which leads to runaway re-coherence of the reachable wavefunction, i.e. all of our forward light-cone). The only reason the ball hasn’t gone critical is that the friendly AI is currently preventing it from doing so. But the AI is at full capacity. If it had a bit more power the AI would completely annihilate the hedonium, since it is a threat to the Coherent Extrapolated Volition (CEV) of the particular human values that led to its creation. Moreover, the friendly AI would then go ahead and erase the memory of anyone who has ever thought of making hedonium, and change them slightly so that they belong to a society of other people who have been brainwashed to not know anything about philosophical hedonism. They would have deeply fulfilling lives, but would never know of the existence of hyper-valuable states of consciousness.

 
Only you can sort out this stalemate. The ball and the AI are in such a delicate balance that throwing a trolley at either will make the other win forever.

 

A Single 3N-Dimensional Universe: Splitting vs. Decoherence

A common way of viewing Everettian quantum mechanics is to say that in an act of measurement, the universe splits into two. There is a world in which the electron has x-spin up, the pointer points to “x-spin up,” and we believe the electron has x-spin up. There is another world in which the electron has x-spin down, the pointer points to “x-spin down,” and we believe the electron has x-spin down. This is why Everettian quantum mechanics is often called “the many worlds interpretation.” Because the contrary pointer readings exist in different universes, no one notices that both are read. This way of interpreting Everettian quantum mechanics raises many metaphysical difficulties. Does the pointer itself split in two? Or are there two numerically distinct pointers? If the whole universe splits into two, doesn’t this wildly violate conservation laws? There is now twice as much energy and momentum in the universe as there was just before the measurement. How plausible is it to say that the entire universe splits?

 

Although this “splitting universes” reading of Everett is popular (Deutsch 1985 speaks this way in describing Everett’s view, a reading originally due to Bryce DeWitt), fortunately, a less puzzling interpretation has been developed. This idea is to read Everett’s theory as he originally intended. Fundamentally, there is no splitting, only the evolution of the wave function according to the Schrödinger dynamics. To make this consistent with experience, it must be the case that there are in the quantum state branches corresponding to what we observe. However, as, for example, David Wallace has argued (2003, 2010), we need not view these branches (indeed, the branching process itself) as fundamental. Rather, these many branches or many worlds are patterns in the one universal quantum state that emerge as the result of its evolution. Wallace, building on work by Simon Saunders (1993), argues that there is a kind of dynamical process, whose technical name is “decoherence,” that can ground the emergence of quasi-classical branches within the quantum state. Decoherence is a process that involves an interaction between two systems (one of which may be regarded as a system and the other its environment) in which distinct components of the quantum state come to evolve independently of one another. That this occurs is the result of the wave function’s Hamiltonian, the kind of system it is. A wave function that (due to the kind of state it started out in and the Schrödinger dynamics) exhibits decoherence will enter into states capable of representation as a sum of noninteracting terms in a particular basis (e.g., a position basis). When this happens, the system’s dynamics will appear classical from the perspective of the individual branches.

 

[…]

 

Note that the facts about the quantum state decohering are not built into the fundamental laws. Rather, this is an accidental fact depending on the kind of state our universe started out in. The existence of these quasi-classical states is not a fundamental fact either, but something that emerges from the complex behavior of the fundamental state. The sense in which there are many worlds in this way of understanding Everettian quantum mechanics is therefore not the same as it is on the more naive approach already described. Fundamentally there is just one universe evolving according to the Schrödinger equation (or whatever is its relativistically appropriate analog). However, because of the special way this one world evolves, and in particular because parts of this world do not interfere with each other and can each on their own ground the existence of quasi-classical macro-objects that look like individual universes, it is correct in this sense to say (nonfundamentally) there are many worlds.

 

[…]

 

As metaphysicians, we are interested in the question of what the world is fundamentally like according to quantum mechanics. Some have argued that the answer these accounts give us (setting aside Bohmian mechanics for the moment) is that fundamentally all one needs to believe in is the wave function. What is the wave function? It is something that, as we have already stated, may be described as a field on configuration space, a space where each point can be taken to correspond to a configuration of particles, a space that has 3N dimensions where N is the number of particles. So, fundamentally, according to these versions of quantum mechanics (orthodox quantum mechanics, Everettian quantum mechanics, spontaneous collapse theories), all there is fundamentally is a wave function, a field in a high-dimensional configuration space. The view that the wave function is a fundamental object and a real, physical field on configuration space is today referred to as “wave function realism.” The view that such a wave function is everything there is fundamentally is wave function monism.

 

To understand wave function monism, it will be helpful to see how it represents the space on which the wave function is spread. We call this space “configuration space,” as is the norm. However, note that on the view just described, this is not an apt name because what is supposed to be fundamental on this view is the wave function, not particles. So, although the points in this space might correspond in a sense to particle configurations, what this space is fundamentally is not a space of particle configurations. Likewise, although we’ve represented the number of dimensions configuration space has as depending on the number N of particles in a system, this space’s dimensionality should not really be construed as dependent on the number of particles in a system. Nevertheless, the wave function monist need not be an eliminativist about particles. As we have seen, for example, in the Everettian approach, wave function monists can allow that there are particles, derivative entities that emerge out of the decoherent behavior of the wave function over time. Wave function monists favoring other solutions to the measurement problem can also allow that there are particles in this derivative sense. But the reason the configuration space on which the wave function is spread has the number of dimensions it does is not, in the final analysis, that there are particles. This is rather a brute fact about the wave function, and this in turn is what grounds the number of particles there are.

 

The Wave Function: Essays on the Metaphysics of Quantum Mechanics. Edited by Alyssa Ney and David Z Albert (pgs. 33-34, 36-37).