Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit- they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside”. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows an unusually clear critique of the problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and it’s impossible to objectively tell if a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering—whether monkeys can suffer, or whether dogs can—it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; if any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of some disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence- in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but to a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness– enumerating its coherent constituent pieces– has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became– and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger as they’ve looked into techniques for how to “de-reify” consciousness while preserving some flavor of moral value, not smaller. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as treating a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’- then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system where you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists- you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
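To see how cheap these reinterpretations are, consider a toy example (my own, not Brian’s): the same physical state trajectory, read under two different encodings, “implements” two different computations, and nothing in the physics privileges either reading.

    # One physical trajectory: a system whose state passes through 3 -> 6 -> 12.
    trajectory = [3, 6, 12]

    # Interpretation A: read states as numbers; the system "computes" doubling.
    doubling = all(b == 2 * a for a, b in zip(trajectory, trajectory[1:]))

    # Interpretation B: read states through an ad-hoc relabeling; the very same
    # trajectory now "computes" the successor function instead.
    relabel = {3: 0, 6: 1, 12: 2}
    successor = all(relabel[b] == relabel[a] + 1
                    for a, b in zip(trajectory, trajectory[1:]))

    print(doubling, successor)  # True True: both readings fit the same physics

With enough freedom in choosing the relabeling, a large enough physical system (like the popcorn) can be read as running nearly any computation you like.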

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect—that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible—in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point- that “There’s no objective meaning to ‘the computation that this physical system is implementing’”– then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.


Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
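The Vaidman-bomb arithmetic above is easy to check numerically. The sketch below (my own toy bookkeeping, not Aaronson’s) tracks the no-bomb case as a coherent rotation toward the “query” state, and the bomb case as a per-step ε² chance of explosion:

    import math

    eps = 0.01
    steps = round(1 / eps)
    theta = math.asin(eps)  # per-step rotation placing ~eps amplitude on "query"

    # No bomb: the amplitude rotates coherently toward the "query" state.
    a_not, a_query = 1.0, 0.0
    for _ in range(steps):
        a_not, a_query = (a_not * math.cos(theta) - a_query * math.sin(theta),
                          a_not * math.sin(theta) + a_query * math.cos(theta))
    print("P(query state | no bomb):", a_query ** 2)   # order 1 (~0.71)

    # Bomb: each step, eps^2 of the probability queries (and explodes),
    # decohering the superposition and resetting the rotation.
    p_survive = 1.0
    for _ in range(steps):
        p_survive *= 1 - eps ** 2
    print("P(explosion | bomb):", 1 - p_survive)        # ~ eps (0.01)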

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be un-computable (run in reverse) would suggest s-risks aren’t as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, a red herring which it not only doesn’t define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism- nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no way to derive any ‘ground truth’, or even to pick between interpretations in a principled way (e.g. my popcorn example). If this isn’t the case— why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”– i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many– but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).



In conclusion- I think FRI has a critically important goal- reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.


Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:

Flavors of Computation are Flavors of Consciousness:

https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/


Is There a Hard Problem of Consciousness?

http://reducing-suffering.org/hard-problem-consciousness/

Consciousness Is a Process, Not a Moment

http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/


How to Interpret a Physical System as a Mind

http://reducing-suffering.org/interpret-physical-system-mind/


Dissolving Confusion about Consciousness

http://reducing-suffering.org/dissolving-confusion-about-consciousness/


Debate between Brian & Mike on consciousness:

https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D

Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:

My meta-framework for consciousness, including the Symmetry Theory of Valence:

http://opentheory.net/PrincipiaQualia.pdf


My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:

http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/


My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:

http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/

My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/

Quantifying Bliss: Talk Summary

Below I provide a summary of the Quantifying Bliss talk at Consciousness Hacking (video, 360-degree live feed recording), which took place on June 7th 2017. I am currently working on a longer and more precise treatment of the topic, which I will be posting here as well. That said, since the talk already makes clear, empirically testable predictions, I decided to publish this summary as soon as possible. After all, there is only a small window of opportunity to publish one’s testable predictions online before the experiment is run and they turn into “retrodictions”. By writing this out and archiving it on time I’m enabling future-me to say “called it!” (if the results are positive) or “at least I tried” (if the experiment fails to show the predicted effects). Better do this quick, then, for science!

The Purpose of Life

We begin by asking the question “what is the purpose of life?”. In order to give a sense for where I am coming from, I explain that I think that the purpose of life is…

  1. To Understand the Universe
  2. To be Happy, and Make Others Happy

I admit that for the first half of my life I thought that the only purpose of life was to understand the universe. If anything, in light of this exclusive goal, happiness could be seen as a temporary distraction rather than something to pursue for its own sake. Thankfully, as a teenager I was exposed to philosophy of mind, was introduced to meditation, and experimented with psychedelics, all of which pointed me to the fact that (a) we don’t understand consciousness yet, and (b) happiness is really a lot more important than we usually think, even if one is only concerned with the most theoretical and abstract level of understanding possible.

I now place “to understand the universe” and “to be happy and make others happy” on an equal footing. What’s more, these two life goals complement each other. On the one hand, understanding the universe will allow you to figure out how to make anyone happy. And on the other hand, being happy and making others happy can allow you to stay motivated in order to figure out the nature of reality. Hence one can think of these two life goals as synergistic rather than as being in opposing camps (of course, at the edges, one will be forced to choose one over the other, but we are nowhere near the point where this is a concern).

By taking these two “purposes of life” seriously we are then faced with a crucial question: What makes an experience valuable? In other words, for someone who is both trying to understand the universe and trying to make its inhabitants as happy as possible, the question “how do you measure the value of an experience?” becomes important.

At Qualia Computing we generally answer that question using the following criteria:

  1. Does it feel good? (happy, loving, pleasant)
  2. Does it make you productive (in a good way)?
  3. Does it make you ethical?

That is to say, the value that we assign to an experience is guided by three criteria. In brief, a valuable experience is one that feels good (i.e. has positive hedonic tone), improves your productivity (in the sense of helping you pursue your own values effectively), and makes you more ethical – both towards yourself and others. That said, for the purpose of this talk, I make it explicit that I will only discuss how to measure (1). In other words, we will concern ourselves with what makes an experience feel good; ethics and productivity are discussed elsewhere.*

What is Bliss?

So what makes an experience feel good? The “feel good” quality of an experience is usually called valence in psychology and neuroscience (also described as the “pleasure-pain axis”). This quality is to be distinguished from arousal, which refers to the amount of energy expressed in an experience. Four examples: Excitement is a high-valence, high-arousal state. Serenity is a high-valence, low-arousal state. Anxiety is low-valence, high-arousal. And depression is low-valence, low-arousal.

For some people valence and arousal are correlated (either negatively or positively, as shown by Peter Kuppens). Likewise, one’s culture can have a large influence on the way one conceptualizes valence (or ideal affect, as demonstrated in the extensive work of Jeanne Tsai). That said, valence is not a cultural phenomenon; even mice can experience negative and positive valence.

Even though valence and arousal do seem to explain a big chunk of the differences between emotions, we can nonetheless find many cases where the “textures” of two emotions feel very different even though their valence and arousal are similar. Hence we ask ourselves: How do we explain and characterize the textural differences between such emotions?

And across all of the possible intensely blissful states on offer (encompassing all of the possible inner meanings present), what exactly is shared between them all at their very core?

Some interpret holistic feelings of wellbeing as a sort of spiritual signal. In this interpretation, feeling at a very deep level that the world is good, that things fall into place perfectly, that you don’t owe anything to anyone, etc. is a sign that you are on the right (spiritual) track. Undoubtedly many people use the (often extreme) positive shift in their valence upon religious conversion as evidence of the validity of their choice. Intense positive valence may not throw Bayesian purists off-balance, but for the rest of the world, blissful experiences are often found as cornerstones of worldviews.

Other people say that bliss is “just chemicals in your brain”. Some claim that it’s more a matter of the functional state of your pleasure centers (themselves affected by dopamine, opioids, etc.) rather than the chemicals themselves. Many others are focused on what usually triggers happiness (e.g. learning, relationships, beliefs, etc.) rather than on what, absolutely, needs to happen for bliss to take place in the simplest experiential terms possible. Most who study this closely become mystics.

Could it be that there’s something structural that makes experiences feel good? Let’s say that there exists a good-fitting mathematical object that translates brain states to experiences. What mathematical property of that object would correspond to valence? Our proposal is very simple. In some sense it is the simplest possible theory for the important problem of consciousness. We propose the Symmetry Theory of Valence.

(The important problem of consciousness asks why experience feels good and/or bad; compare the hard problem of consciousness, which asks why consciousness exists to begin with.)

The Symmetry Theory of Valence

We are pretty confident that consciousness is a real and measurable phenomenon. That’s why Consciousness Hacking is such a good venue for this kind of discussion: here we can talk freely about the properties of consciousness without getting caught up in whether it exists at all. Now, symmetry is a very general term; how can we make it precise?

Harmony feels good because it’s symmetry over time. In reality, our moments of experience contain a temporal direction. I call this a pseudo-time arrow, since its direction is likely encoded in the patterns of statistical independence between the qualia experienced. And by manipulating the symmetrical connectivity of the micro-structure of one’s consciousness, one can change the perception of time: the way one evaluates when one is and how fast one is going.

In this model, the pleasure centers would work as “tuning knobs” of harmonic patterns. They are establishing the mood, the underlying tone to which the rest needs to adapt. And the emotional centers, including the amygdala, would be strategically positioned to add anti-symmetry instead. Hence, in this framework we would think of boredom as an “anti-symmetry” mechanism. It prevents us from getting stuck in shallow ponds, but it can be nasty if left unchecked. Cognitive activity may be in part explained by differences in boredom thresholds.

Connectome-Specific Harmonics

I was at the Psychedelic Science 2017 conference when I saw Selen Atasoy presenting on improvisation enhancement with LSD. She used a previously developed paradigm, whose methods and empirical tests were published in Nature Communications in 2016, now applied to psychedelic research. For a good introduction check out the partial transcript of her talk.

In her talk she shows how one can measure the various “pure harmonics” in a given brain. The core idea is that brain activity can be interpreted as a weighted sum of “natural resonant frequencies” for the entire connectome (white matter tracts together with the grey matter connections). They actually take the physical structure of a mapped brain and simulate the effect of applying the excitation-inhibition differential equations known for collective neural activity propagation. Then they infer the presence and prevalence of these “pure harmonics” in a brain at a given point in time using a probabilistic reconstruction.

Chladni plates here are a wonderful metaphor for these brain harmonics. This is because the way the excitation-inhibition wavefront propagates is very similar in both Chladni plates and human brains. In both cases the system drifts slowly within the attractor basin of natural frequencies, where the wavefront wraps around the medium an integer number of times. I was in awe to see her approach applied to psychedelic research. After all, Qualia Computing has indeed explored harmonic patterns in psychedelic experiences (ex. 1, ex. 2, ex. 3), and the connection was made explicit in Principia Qualia (via the concept of neuroacoustic modulation).
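To make the decomposition idea concrete, here is a minimal sketch of the core move (my own construction, not Atasoy’s actual pipeline: the random “connectome” and the use of plain graph-Laplacian eigenmodes in place of her excitation-inhibition model are simplifying assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "connectome": a symmetric weighted adjacency matrix over n regions.
    n = 64
    A = rng.random((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0)

    # Graph Laplacian L = D - A; its eigenvectors play the role of the "pure
    # harmonics", ordered from low to high spatial frequency by eigenvalue.
    L = np.diag(A.sum(axis=1)) - A
    eigvals, modes = np.linalg.eigh(L)   # modes[:, k] = k-th harmonic

    # "Brain activity" at one time point, decomposed into the harmonic basis.
    activity = rng.random(n)
    weights = modes.T @ activity         # contribution of each pure harmonic

    # Reconstruction check: the weighted sum of harmonics recovers the activity.
    assert np.allclose(modes @ weights, activity)
    print("power in the 5 lowest harmonics:", np.round(weights[:5] ** 2, 3))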


But what do these harmonics look like in the brain? Show me a brain!


Notice the traveling wave wrapping around the brain an integer number of times in each of these numerical solutions (source). The work by these labs is incredible, and they seem to show that the brain’s activity can be decomposed into each of these harmonics.

At the Psychedelic Science 2017 conference, Selen Atasoy explained that very low frequency harmonics were associated with Ego Dissolution in the trials that they studied. She also explained that emotional arousal, here defined as one’s overall level of energy in the emotional component (i.e. anxiety and ecstasy vs. depression and serenity), also correlated with low frequency harmonic states. On the other hand, high valence states were correlated with high frequency brain harmonics.

These are empirical results that, I claim, we could have predicted with the Symmetry Theory of Valence. I then thought to myself: let’s try to come up with other predictions. How should we consider the mixture of various harmonics, beyond merely their individual presence? How can we reconstruct valence from this novel data-structure for representing brain-states?

The Algorithm for Quantifying Bliss

Starting my reasoning from first principles (sourced from the Symmetry Theory of Valence), the natural way to take a data-structure that represents states of consciousness and recover its valence (in cases where samples occur across time in addition to space) is to try to isolate the noise, then quantify the dissonance; what remains is the consonance. Basically, one will estimate the rough amount of symmetry (over time), the degree of anti-symmetry, and the total level of noise.

In other words, I prophesy that we can get an “affective signature” of any brain state by applying an algorithm to fMRI brain recordings in order to estimate the degree of (1) consonance, (2) dissonance, and (3) noise within and across the brain’s natural harmonic states. This will result in what I call “Consonance-Dissonance-Noise Signatures” of brain states (“CDNS” for short), consisting of three histograms that describe the spectra of consonance, dissonance, and noise in a given moment of experience. The algorithm to arrive at a CDNS of a brain state is as follows:

  1. Remove some of the noise in the brain state by applying the technique in Atasoy (2016) and recovering the best possible approximation of the harmonics present (you may apply some further denoising on the harmonics when taken as a collective).
  2. Estimate the total dissonance of the combination of harmonics by taking each pair of harmonics and quantifying their mutual dissonance.
  3. Subtract the dissonance from “all of the interactions that could have existed”; what’s left ends up being the consonance.

This way you obtain a Consonance, Dissonance, Noise Signature.
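As a concrete gloss on this recipe, here is a minimal sketch (my own schematic reading of the steps, not QRI’s implementation). It returns scalar totals rather than the full spectra, and its simple within-critical-band rule for pairwise dissonance is a placeholder that the next section refines:

    def pairwise_dissonance(f_i, p_i, f_j, p_j, critical_band=1.0):
        # Crude rule: harmonics within the critical band contribute their
        # minimum power as dissonance; harmonics further apart contribute none.
        return min(p_i, p_j) if abs(f_i - f_j) < critical_band else 0.0

    def cdns(freqs, powers, residual):
        """Total (consonance, dissonance, noise) for one brain state.
        freqs/powers: frequency and power of each recovered harmonic;
        residual: power left unexplained by the harmonic decomposition."""
        noise = float(sum(residual))
        dissonance, total_interaction = 0.0, 0.0
        for i in range(len(freqs)):
            for j in range(i + 1, len(freqs)):
                total_interaction += min(powers[i], powers[j])
                dissonance += pairwise_dissonance(freqs[i], powers[i],
                                                  freqs[j], powers[j])
        consonance = total_interaction - dissonance  # what's left is consonant
        return consonance, dissonance, noise

    # Toy usage: three harmonics, two of them close enough to clash.
    print(cdns(freqs=[2.0, 2.5, 10.0], powers=[1.0, 0.8, 0.5], residual=[0.1, 0.2]))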


Each of these three components will have their associated spectral power distribution. The noise spectrum is obtained during the first denoising step (as whatever cannot be explained by the harmonic decomposition). Then the dissonance spectrum is a function of the minimum power of pairs of harmonics that exist within the critical band of each other (see slide 18, possibly upgraded by slide 20), as well as the frequencies of the beating patterns.

Quantifying Dissonance?

In order to quantify dissonance we use a method that may end up being simpler than what you need to calculate dissonance for sound! E.g. in Quantifying the Consonance of Complex Tones With Missing Fundamentals (Chon 2008) we learn that the human auditory system may at times detect dissonance even when there is no actual dissonance in the input. That is, there are auditory illusions pertaining to valence and dissonance. Based on the missing fundamental one can create ghost dissonance between tones that are not even present. That said, quantifying dissonance in a brain in terms of its harmonic decomposition may be easier than quantifying dissonance in auditory input, precisely because the auditory input (and any sensory input for that matter) goes through many intermediary pre-processing steps. The auditory system is relatively “direct” when compared to, e.g., the visual system, but you will still see some basic signal processing done to the input before it influences brain harmonics. The sensory systems, being adapted both to interface with a functioning valence system and to represent information adequately (in terms of the real-world distribution of inputs), serve to translate inputs into usable signals: frequency-based descriptions, often log-transformed, from which valence gradients can be derived. For this reason, the algorithm that describes how to extract valence out of a brain state may turn out to be simpler than what you need to predict the hedonic quality of patterns of sound (or sight, touch, etc).

In brief, we propose that we can compute the approximate amount of dissonance between these harmonics by seeing how close they are in terms of spatial and temporal frequencies. If they are within the critical window then they will be considered as dissonant. There is likely to be a peak dissonance window, and when any pair of harmonic states lives within that window, experiencing both at once may feel really awful (to quantify such dissonance more precisely we would use a dissonance function as shown in Chon 2008). If indeed symmetry is intimately connected to valence, then highly anti-symmetrical states such as what’s produced by overlapping brain harmonics within the critical band may feel terrible. Remember, harmony is symmetry over time. So dissonance is anti-symmetry over time. It’s worth recalling, though, that in the absence of dissonance and noise, by default, what remains is consonance.
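To make the peak dissonance window concrete, here is one way the placeholder pairwise rule sketched earlier could be upgraded to a smooth curve (a Plomp-Levelt-style roughness function; this particular parameterization and the crude critical-bandwidth proxy are my own illustrative assumptions, not Chon’s exact formula):

    import math

    def dissonance_curve(f1, p1, f2, p2):
        """Smooth pairwise dissonance: zero at unison, peaking when the two
        frequencies sit roughly a quarter of a critical bandwidth apart,
        then decaying; weighted by the weaker partial's power."""
        f_low, f_high = min(f1, f2), max(f1, f2)
        cbw = 0.25 * f_low + 25.0      # crude critical-bandwidth proxy
        x = (f_high - f_low) / cbw     # separation in critical-band units
        roughness = math.exp(-3.5 * x) - math.exp(-5.75 * x)
        return min(p1, p2) * max(roughness, 0.0)

    # Unison, a near-critical-band clash, and wide separation:
    for f2 in (440.0, 470.0, 880.0):
        print(f2, round(dissonance_curve(440.0, 1.0, f2, 1.0), 4))

Notice the curve vanishes at unison, peaks for nearby-but-not-identical frequencies, and decays toward zero for widely separated pairs, which is the qualitative shape the critical-band story requires.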

Visualizing Emotions as CDNS’s of States of Consciousness

Above you can find two ways of visualizing a CDNS. Before we go on to the predictions, here we illustrate how we think that we will be able to see at a glance the valence of a brain with our method. The big circle shows the dissonance and consonance for each of the brain harmonics (the black dots surrounding the circle represent the weights for each state). If you want the overall dissonance in a given state, you add up the red-yellow arrows, whereas if you want the total consonance, you add the purple-light-blue arrows. The triangles on the right expand upon the valence diagram presented in Principia Qualia. Namely, we have a blue (positive valence/consonant), red (negative valence/dissonant), and grey (neutral valence/noise) component in a state of consciousness. Each of these components has a spectrum; the myriad textures of emotional states are the result of different spectral signatures for hedonically loaded patterns.

Testable Predictions


We predict that intense emotions/experiences reported on psychedelics will result in states of consciousness whose harmonic decomposition will show a high amount of energy to be found in the pure harmonics (this was already found in 2017 as explained in the presentation, so let’s count that as a retrodiction). People who report being “very high” will have particularly high amounts of energy in their pure harmonics (as opposed to more noisy states).

The predicted valence for these experiences will be a function of the particular patterns (in terms of relative weights) of the various harmonics. Those that generate a highly harmonic CDNS will be blessed with high-valence experiences, while those with high empirically measured dissonance will come with reports of negative feelings (e.g. fear, anxiety, nausea, weird and unpleasant body load, etc.). In particular, we can explore the shape of highly harmonic states. In this framework, MDMA would likely be seen as working by increasing the energy expressed by an exceptionally consonant set of harmonics in the brain.

A point to make here is that predicting “pure harmonics” on psychedelics (evidently simple and ordered patterns) would seem to run counter to the recently accrued empirical data concerning entropy in the tripping brain.** But we also know that the psychedelic brain can produce ridiculously self-similar, near-informationless, yet highly intense moments of experience, preceded by a symmetrification process. Indeed, there are several symmetric attractors for the interplay of awareness and attention at various levels of “consciousness energy” and quality of mood. These states, in turn, are not only hedonically charged, but also allow the exploration of high-energy qualia research (since the implicit symmetry provides an energy seal). Highly energetic states of consciousness can be encapsulated in a highly symmetrical network of local binding. More about this in a future article.

On the other hand, we predict that people on SSRIs will show an enhanced amount of noise in their CDNS. A couple of slides back, this was represented as a higher loading of activity in the grey component of the triangular visualization of a CDNS. Likewise, other drugs will have characteristic effects on the CDNS, such as stimulants inducing more consonance in the high frequencies, and opioids and hypnotics inducing high consonance in the low frequencies.

Summary of Predictions About Drug Effects

  1. Psychedelic substances will increase the overall power of the brain’s pure harmonics, and thus result in a CDN Signature characterized by: (a) high consonance of all frequencies, (b) high dissonance of all frequencies, and (c) low noise of all frequencies. Criticality will be observed by way of the CDNS having high variance.
  2. MDMA will produce a very specific range of states featuring, on the one hand, very pure high-frequency harmonics and, on the other, very little collective dissonance and noise. In other words: (a) high amounts of high-frequency consonance, (b) low amounts of dissonance of all frequencies, and (c) low noise of all frequencies.
  3. Any “affect blunting” agent, such as SSRIs, ibuprofen, aspirin, acetaminophen, and agmatine, will produce a CDNS characterized by: (a) reduced consonance of all frequencies, (b) reduced dissonance of all frequencies, and (c) increased noise in either some or all frequencies. We further hypothesize that different antidepressants (e.g. citalopram vs. fluoxetine) will look the same with respect to reducing the C and D components, but may differ in the way they increase the N spectrum.
  4. Opioids in euphoric doses will be found to (a) increase low frequency consonance, (b) decrease dissonance for all frequencies but especially the high frequencies, and (c) slightly increase noise across the board.
  5. Stimulants will be found to (a) increase medium and high frequency consonance, (b) leave dissonance fairly unaltered, and (c) reduce noise for all frequencies but especially those in the upper end of the spectrum.
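For testing purposes, the five predictions above can be condensed into a small lookup table of expected directional changes per CDNS component. This is merely a hypothetical encoding of the claims as stated, meant to be compared against empirically derived CDN signatures:

```python
# "+" = increase, "-" = decrease, "0" = roughly unchanged; band annotations
# follow the predictions listed above.
PREDICTED_CDN_SIGNATURES = {
    "psychedelic": {"C": "+ all bands",  "D": "+ all bands",      "N": "- all bands"},
    "MDMA":        {"C": "+ high band",  "D": "- all bands",      "N": "- all bands"},
    "blunting":    {"C": "- all bands",  "D": "- all bands",      "N": "+ some/all bands"},
    "opioid":      {"C": "+ low band",   "D": "- all, esp. high", "N": "+ slightly, all bands"},
    "stimulant":   {"C": "+ mid/high",   "D": "0",                "N": "- all, esp. high"},
}
```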

Predictions About Emotions

For now, here are the specific predictions concerning emotions that I am making:

  1. The energy of the consonant (C) component of a CDNS will be highly correlated with the amount of euphoria (pleasure, happiness, positive feelings, etc.) a person is experiencing.
  2. The energy of the dissonant (D) component will have a high correlation with the amount of dysphoria (pain, suffering, negative feelings, etc.) a person feels.
  3. The energy of the noise (N) component will be correlated with flattened affect and blunted valence (i.e. feeling neither good nor bad, like there is a fog that masks all feelings).
  4. If one creates a geometric representation of the relationships between various brain states, using their respective CDNS similarities as a distance metric for emotional states together with Multi-Dimensional Scaling (MDS) techniques, one will be able to recover a very good approximation of the empirically derived dimensional models of emotions (cf. dimensional models of emotion and Wire-heading Done Right). In other words, if you ask your participants to tell you how they feel during the fMRI sessions, associate those reported emotions with their instantaneous CDNS, and then apply multidimensional scaling to the resulting CDNS, you will be able to recover a good dimensional picture of the state-space of emotions. I.e. “subjective similarity between emotions” will be closely tracked by the geometric distance between their corresponding CDNS (see the sketch after this list):
    1. Applying MDS to the C component of the CDNS will result in a better characterization of the differences between positive emotions.
    2. Applying MDS to the D component will result in a better characterization of the differences between negative emotions. And,
    3. Applying MDS to the N component will result in a better characterization of the differences between valence-neutral emotions.
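As a sketch of how prediction 4 could be tested: given one CDNS vector per recorded emotional state, MDS on the matrix of pairwise CDNS distances should recover a low-dimensional map comparable to the empirically derived dimensional models of emotion. The toy data and feature layout below are placeholders; the only real dependency assumed is scikit-learn’s standard MDS API:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
cdns = rng.random((20, 30))   # toy data: 20 emotional states x 30 CDNS features

# Pairwise Euclidean distances between CDNS vectors stand in for
# "subjective dissimilarity between emotions".
dists = np.linalg.norm(cdns[:, None, :] - cdns[None, :, :], axis=-1)

# Recover a 2D "dimensional model of emotion" from the distances. Restricting
# the feature columns to the C, D, or N component would give the sub-maps of
# predictions 4.1-4.3.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dists)
print(coords.shape)           # (20, 2)
```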

The Future of Mental Health

[Slide 28 of the Quantifying Bliss presentation]

Sir, your 17th harmonic is really messing up the consonance of your 19th harmonic, and it interrupts the creative morning mood you recently enjoyed. I suggest taking 1mg of Coluracetam, listening to a selection of Diamond songs, and RD23 [stretching exercise]. Here’s your expected CDNS.

Penfield mood organs may not be as terrible as they seem. At least not if you’re given a good combination of personalized settings and a manual for wire-heading in the proper manner. In such a situation, among the options available, you will have the ability to choose an experientially attractive, healthy, and sustainable set of moods indefinitely.

The “clinical phenomenologist” of the year 2050 might look into your brain harmonics and try to find the shortest paths to nearby state-spaces with less chronic dissonance, fishing for high-consonance attractors with large basins. The qualia expert would go on to provide you with various options that may improve all sorts of metrics, including valence, the most important of them all. If you ask, your phenomenologist can give you trials of fully reversible treatments. You sample them in your own time, of course, testing them for a day or two before deciding whether to use these moods for longer.

Personalized Harmonic Retuning

I assume that people will be given just about enough retuning to get back to their daily routines as they themselves prefer them, but without any sort of nagging dissonance. Most people will probably continue on with their preference architectures relatively unchanged. Indeed, that will be a valued quality of a personalized harmonic retuning product. Having adequate mood devices that don’t mess up your existing value system might eventually become a highly understood, precision-engineered aspect of mainstream mental health, at least compared to the current (pre-psychedelic-re-adoption, 2017) paradigms. Arguably, even psychedelic therapy is pretty blunt in a way: not in the sense of blunting the hedonic quality of your experience (on the contrary), but in the sense of applying the harmonization process indiscriminately.

For the psychonauts (hopefully not too rare by then) who still want to investigate consciousness even though human life is already full of love, we will have a different arrangement: they will be free to explore themselves while being part of a research institute. Indeed, pursuing the purpose of understanding the big picture (including consciousness) will require the experimental method. Moreover, exploring the state-space of consciousness will, for the foreseeable future, be a way to find new ways of making others happy. People will continue to explore alien state-spaces in search of highly priced, high-valence states. For at least some scores of generations, valence engineering is bound to remain economically profitable. As we discover new drugs, new treatments, new philosophical trances, new interpretations and expressions of love, and so on, the economy will adapt to these inventions. We already live in an informational economy of states of consciousness, and the future is likely to be like that as well, except that consciousness technologies will be immensely more powerful.

Barring the unlikely emergence of anti-hedonist, Spartan, self-punishing transhumanist social movements enabled by genetic technology, I don’t anticipate major obstacles to the eventual widespread use of mood organs. In fact, the wide adoption of SSRIs in some pockets of society shows that the general public is willing to make minor self-adjustments to deal with chronic negativity. Hedonic technology is in its early days, but with a root understanding of the nature of valence, the sky is the limit.

Case studies – SSRIs & Psychedelics

Let’s take a closer look at SSRIs and psychedelics in light of the Symmetry Theory of Valence.

SSRIs have an overall effect of blunting one’s experience at pretty much every level imaginable, usually just a little: enough to help people re-establish a new order between their harmonics in a noisier, less intense range of moods. Some people may benefit from this sort of intervention. It is also worth pointing out the possible side effects, which share the common theme of reducing the structural integrity of the micro-structure of consciousness: highly ordered experiences, pleasant and unpleasant alike, get softened. Whether this generalized softening is beneficial depends on many factors. Psychonauts usually avoid SSRIs as much as possible in order to protect the psychoacoustical potential of their brain, in case they wish to use that potential sometime in the future.

Psychedelics, in this framework, would be interpreted as neuroacoustic enhancers. These agents trigger, via control interruption, a more “echo-ey” acoustic environment for one’s consciousness. Meaning, any qualia experienced under the influence lasts longer (the decay of the intensity of experience, as a function of time since the presentation of the stimulus, becomes a lot “slower” or “fatter”). On high doses, each repetition of a cycle of experience can feel just as intense as the first, and thus one might find oneself unable to locate oneself in time. Sometimes intense feelings return cyclically; ultimately, at strong doses, experiential feedback dominates every aspect of one’s experience, and there isn’t anything other than standing waves of synesthetic psychedelic feelings.
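As a toy illustration of the “slower/fatter decay” idea (an analogy, not a mechanistic claim): model the intensity trace left by a stimulus as a leaky integrator, where control interruption corresponds to a decay constant closer to 1. All constants are arbitrary illustrative choices:

```python
import numpy as np

def experience_trace(stimulus, decay):
    # Leaky-integrator "reverb": each input adds to a trace that fades at
    # rate `decay` per step; decay near 1 = echo-ey, long-lasting qualia.
    level, trace = 0.0, []
    for s in stimulus:
        level = decay * level + s
        trace.append(level)
    return np.array(trace)

impulse = np.zeros(100)
impulse[0] = 1.0
sober = experience_trace(impulse, decay=0.70)  # trace nearly gone by step 10
dosed = experience_trace(impulse, decay=0.98)  # ~36% still echoing at step 50
print(sober[50], dosed[50])
```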

Peak-symmetry states, with their associated valence, are predicted to be far more accessible in highly harmonic states of consciousness. So psychedelics and the like could be carefully used to explore the positive extreme of valence: hyper-symmetrical states. That said, for responsible exploration, a euphoriant will be needed to prevent negative psychedelic experiences.

Final Thoughts

A Harmonic Society is a place where everyone recognizes what makes other sentient beings love life; a place in which everyone deeply understands the valence landscapes of other beings. People in such a society would know that a zebra, an owl, and a salamander all share the pursuit of harmonic states of consciousness, albeit in their own, often different-looking, state-spaces of qualia. We would understand each other far more deeply if we saw each other’s valence landscapes as parts of one big state-space of possible preference architectures. Ultimately, the pursuit of existential bliss and the ontological question (why being?) would incite us to explore each other through consciousness technologies. We will have an expanded state-space of available moods, both individual and collective, increasing our chances of finding a revolutionary new understanding of consciousness, identity, and what’s possible for post-hedonium societies.


*I will note that to define what’s ethical one ultimately relies on beliefs about personal identity; truly frame-independent systems of morality are exceptionally hard to construct.

**The Entropic Brain theory portrays psychedelia in terms of increased entropy but, most importantly, also focuses on criticality. Thinking about entropy alone would not distinguish between adding white noise and adding interesting patterns. In other words, from the point of view of simple entropy, without any spectral (or nonlinear) analysis, SSRIs and psychedelics are doing pretty much the same thing. So the sense of “entropy” that matters will have to be a lot more detailed, showing in what way the information encoded in normal states of consciousness changes as a function of entropy added in various ways.

On psychedelics one does indeed find highly ordered crystal-like states of consciousness (which I’ve described elsewhere as peak symmetry states), and as far as we know those states are also some of the most positively hedonically charged. Hence, at least in terms of describing the quality of the psychedelic experience, leaving symmetry out would make us miss an important big-picture kind of quality for psychedelics in general and their connection to valence variance.

 

*** See the following quote:

My hypothesis strongly implies that ‘hedonic’ brain regions influence mood by virtue of acting as ‘tuning knobs’ for symmetry/harmony in the brain’s consciousness centers. Likewise, nociceptors, and the brain regions which gate & interpret their signals, will be located at critical points in brain networks, able to cause large amounts of salience-inducing antisymmetry very efficiently. We should also expect rhythm to be a powerful tool for modeling brain dynamics involving valence- for instance, we should be able to extend (Safron 2016)’s model of rhythmic entrainment in orgasm to other sorts of pleasure.

– Michael Johnson in Principia Qualia, page 52

ELI5 “The Hyperbolic Geometry of DMT Experiences”


I wrote the following in response to a comment on the r/RationalPsychonaut subreddit about this DMT article I wrote some time ago. The comment in question was: “Can somebody eli5 [explain like I am 5 years old] this for me?” So here is my attempt (more like “eli12”, but anyways):

In order to explain the core idea of the article I need to convey the main takeaways of the following four things:

  1. Differential geometry,
  2. How it relates to symmetry,
  3. How it applies to experience, and
  4. How the effects of DMT turn out to be explained (in part) by changes in the curvature of one’s experience of space (what we call “phenomenal space”).

1) Differential Geometry

If you are an ant on a ball, it may seem like you live on a “flat surface”. However, imagine you do the following: you advance one centimeter in one direction, turn 90 degrees and walk another centimeter, then turn 90 degrees again and advance yet another centimeter. Logically, you just “traced three edges of a square”, so you cannot be in the same place from which you departed. But let’s say that you somehow do happen to arrive at the same place. What happened? Well, it turns out the world in which you are walking is not quite flat! It’s very flat from your point of view, but overall it is a sphere! So you ARE able to walk along a triangle that happens to have three 90-degree corners.

That’s what we call a “positively curved space”. There the angles of triangles add up to more than 180 degrees. In flat spaces they add up to 180. And in “negatively curved spaces” (i.e. “negative Gaussian curvature” as talked about in the article) they add up to less than 180 degrees.


Eight 90-degree triangles on the surface of a sphere

So let’s go back to the ant again. Now imagine that you are walking on some surface that, again, looks flat from your restricted point of view. You walk one centimeter, then turn 90 degrees, then walk another, turn 90 degrees, etc. for a total of, say, 5 times. And somehow you arrive at the same point! So now you traced a pentagon with 90 degree corners. How is that possible? The answer is that you are now in a “negatively curved space”, a kind of surface that in mathematics is called “hyperbolic”. Of course it sounds impossible that this could happen in real life. But the truth is that there are many hyperbolic surfaces that you can encounter in your daily life. Just to give an example, kale is a highly hyperbolic 2D surface (“H2” for short). It’s crumbly and very curved. So an ant might actually be able to walk along a regular pentagon with 90-degree corners if it’s walking on kale (cf. Too Many Triangles).


An ant walking on kale may infer that the world is an H2 space.

In brief, hyperbolic geometry is the study of spaces that have this quality of negative curvature. Now, how is this related to symmetry?

2) How it relates to symmetry

As mentioned, on the surface of a sphere you can find triangles with 90-degree corners. In fact, you can partition the surface of a sphere into 8 regular triangles, each with 90-degree corners. There are also other ways of partitioning the surface of a sphere with regular shapes (“regular” in the sense that every edge has the same length and every corner has the same angle), but the number of ways to do it is not infinite. After all, there are only five regular polyhedra (which, when “inflated”, are equivalent to the ways of partitioning the surface of a sphere in regular ways).


If you instead want to partition a plane in a regular way with geometric shapes, you don’t have many options: you can partition it using triangles, squares, or hexagons. In all of those cases, the angles at each vertex will add up to 360 degrees (e.g. six triangles, four squares, or three corners of hexagons meeting at a point). I won’t get into wallpaper groups, but suffice it to say that there is also a limited number of ways of breaking down a flat surface using symmetry elements (such as reflections, rotations, etc.).


Regular tilings of 2D flat space

Hyperbolic 2D surfaces, by contrast, can be partitioned in regular ways in infinitely many ways! This is because we no longer have the constraints imposed by flat (or spherical) geometries, where the angles of shapes must add up to a certain number of degrees. As mentioned, on hyperbolic surfaces the corners of triangles add up to less than 180 degrees, so you can fit more than 6 corners of equilateral triangles at one point (and, depending on the curvature of the space, up to an infinite number of them). Likewise, you can tessellate the entire hyperbolic plane with heptagons.


Hyperbolic tiling: Each of the heptagons is just as big (i.e. this is a projection of the real thing)

On the flip side, if you see a regular partitioning of a surface, you can infer its curvature! For example, if you see a surface entirely covered with heptagons, three at each corner, you can be sure you are looking at a hyperbolic surface. And if you see a surface covered with triangles such that there are only four triangles at each joint, then you know you are seeing a spherical surface. So if you train yourself to notice and count these properties in regular patterns, you will indirectly also be able to determine whether the patterns inhabit a spherical, flat, or hyperbolic space!
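This inference can even be automated with the classical Schläfli-symbol criterion: a tiling {p, q} made of regular p-gons, q of them around each vertex, is spherical, flat, or hyperbolic according to whether (p − 2)(q − 2) is less than, equal to, or greater than 4. A small sketch:

```python
def tiling_geometry(p, q):
    # Regular tiling {p, q}: q regular p-gons meeting at each vertex.
    s = (p - 2) * (q - 2)
    if s < 4:
        return "spherical"    # closes up into a regular polyhedron
    if s == 4:
        return "flat"         # one of the three Euclidean tilings
    return "hyperbolic"       # only fits in negatively curved space

assert tiling_geometry(3, 4) == "spherical"   # four triangles per joint (octahedron)
assert tiling_geometry(4, 4) == "flat"        # the square grid
assert tiling_geometry(7, 3) == "hyperbolic"  # three heptagons per corner
```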

3) How it applies to experience

How does this apply to experience? Well, in sober states of consciousness one is usually restricted to seeing and imagining flat and spherical surfaces (and their corresponding symmetric partitions). One can of course look at a piece of kale and think “wow, that’s a hyperbolic surface”, but what is impossible to do is to see it “as if it were flat”. One can only see hyperbolic surfaces as projections (i.e. where regular shapes are made to look irregular so that they fit on a flat surface), or one ends up contorting the surface in a crumpled fashion in order to fit it into our flat experiential space. (Note: even sober phenomenal space happens to be based on projective geometry; but let’s not go there for now.)

4) DMT: Hyperbolizing Phenomenal Space

In psychedelic states it is common to experience whatever one looks at (or, with more stunning effects, whatever one hallucinates in a sensorially-deprived environment such as a flotation tank) as slowly becoming more and more symmetric. Symmetrical patterns are attractors in psychedelia. It’s common for people to describe their acid experiences as “a kaleidoscope of colors and meaning”. We should not be too quick to dismiss these descriptions as purely metaphorical. As you can see from the article Algorithmic Reduction of Psychedelic States as well as PsychonautWiki’s Symmetrical Texture Repetition, LSD and other psychedelics do in fact “symmetrify” the textures you experience!


What gravel might look like on 150 mics of LSD (Source)

As it turns out, this symmetrification process (what we call “lowering the symmetry detection/propagation threshold”) does allow one to experience any of the possible ways of breaking down spherical and flat surfaces in regular ways (in addition to enabling the experience of any wallpaper group!). Thus the surfaces of the objects one hallucinates on LSD (especially in Closed-Eye Visuals) are usually carpeted with patterns that have either spherical or flat symmetries (e.g. honeycombs, square grids, and regular triangulations; or dodecahedra, cubes, etc.).


17 wallpaper symmetry groups

Only on very high doses of classic psychedelics does one start to experience objects that have hyperbolic curvature. And this is where DMT becomes very relevant. Vaping it is one of the most efficient ways of achieving “unworldly levels of psychedelia”:

On DMT the “symmetry detection threshold” is reduced to such an extent that any surface you look at very quickly gets super-saturated with regular patterns. Since (for reasons we don’t understand) our brain tries to incorporate whatever shape you hallucinate into the scene as part of the scene, the result of seeing too many triangles (or heptagons, or whatever) is that your brain will “push them into the surfaces” and, in effect, turn those surfaces into hyperbolic spaces.

Yet another part of your brain (or system of consciousness, whatever it turns out to be) recognizes that “wait, this is waaaay too curved somehow, let me try to shape it into something that could actually exist in my universe”. Hence, in practice, if you take between 10 and 20 mg of DMT, the hyperbolic surfaces you see will become bent and contorted (similar to the pictures you find in the article) just so that they can be “embedded” (a term that roughly means “to fit some object into a space without distorting its properties too much”) into your experience of the space around you.

But then there’s a critical point at which this is no longer possible: Even the most contorted embeddings of the hyperbolic surfaces you experience cannot fit any longer in your normal experience of space on doses above 20 mg, so your mind has no other choice but to change the curvature of the 3D space around you! Thus when you go from “very high on DMT” to “super high on DMT” it feels like you are traveling to an entirely new dimension, where the objects you experience do not fit any longer into the normal world of human experience. They exist in H3 (hyperbolic 3D space). And this is in part why it is so extremely difficult to convey the subjective quality of these experiences. One needs to invoke mathematical notions that are unfamiliar to most people; and even then, when they do understand the math, the raw feeling of changing the damn geometry of your experience is still a lot weirder than you could ever anticipate.


Anybody else want to play hyperbolic soccer? Humans vs. Entities, the match of the eon!

Note: The original article goes into more depth.

Now that you understand the gist of the original article, I encourage you to take a closer look at it, as it includes content that I didn’t touch on in this ELI5 (or 12) summary. It provides a granular description of the 6 levels of DMT experience (Threshold, Chrysanthemum, Magic Eye, Waiting Room, Breakthrough, and Amnesia), many pictures to illustrate the various levels as well as the particular emergent geometries, and a theoretical discussion of the various algorithmic reductions that might explain how the hyperbolization of phenomenal space takes place by combining a series of simpler effects.

Political Peacocks

Extract from Geoffrey Miller’s essay “Political peacocks”

The hypothesis

Humans are ideological animals. We show strong motivations and incredible capacities to learn, create, recombine, and disseminate ideas. Despite the evidence that these idea-processing systems are complex biological adaptations that must have evolved through Darwinian selection, even the most ardent modern Darwinians such as Stephen Jay Gould, Richard Dawkins, and Dan Dennett tend to treat culture as an evolutionary arena separate from biology. One reason for this failure of nerve is that it is so difficult to think of any form of natural selection that would favor such extreme, costly, and obsessive ideological behavior. Until the last 40,000 years of human evolution, the pace of technological and social change was so slow that it’s hard to believe there was much of a survival payoff to becoming such an ideological animal. My hypothesis, developed in a long Ph.D. dissertation, several recent papers, and a forthcoming book, is that the payoffs to ideological behavior were largely reproductive. The heritable mental capacities that underpin human language, culture, music, art, and myth-making evolved through sexual selection operating on both men and women, through mutual mate choice. Whatever technological benefits those capacities happen to have produced in recent centuries are unanticipated side-effects of adaptations originally designed for courtship.

[…]

The predictions and implications

The vast majority of people in modern societies have almost no political power, yet have strong political convictions that they broadcast insistently, frequently, and loudly when social conditions are right. This behavior is puzzling to economists, who see clear time and energy costs to ideological behavior, but little political benefit to the individual. My point is that the individual benefits of expressing political ideology are usually not political at all, but social and sexual. As such, political ideology is under strong social and sexual constraints that make little sense to political theorists and policy experts. This simple idea may solve a number of old puzzles in political psychology. Why do hundreds of questionnaires show that men are more conservative, more authoritarian, more rights-oriented, and less empathy-oriented than women? Why do people become more conservative as they move from young adulthood to middle age? Why do more men than women run for political office? Why are most ideological revolutions initiated by young single men?

None of these phenomena make sense if political ideology is a rational reflection of political self-interest. In political, economic, and psychological terms, everyone has equally strong self-interests, so everyone should produce equal amounts of ideological behavior, if that behavior functions to advance political self-interest. However, we know from sexual selection theory that not everyone has equally strong reproductive interests. Males have much more to gain from each act of intercourse than females, because, by definition, they invest less in each gamete. Young males should be especially risk-seeking in their reproductive behavior, because they have the most to win and the least to lose from risky courtship behavior (such as becoming a political revolutionary). These predictions are obvious to any sexual selection theorist. Less obvious are the ways in which political ideology is used to advertise different aspects of one’s personality across the lifespan.

In unpublished studies I ran at Stanford University with Felicia Pratto, we found that university students tend to treat each others’ political orientations as proxies for personality traits. Conservatism is simply read off as indicating an ambitious, self-interested personality who will excel at protecting and provisioning his or her mate. Liberalism is read as indicating a caring, empathetic personality who will excel at child care and relationship-building. Given the well-documented, cross-culturally universal sex difference in human mate choice criteria, with men favoring younger, fertile women, and women favoring older, higher-status, richer men, the expression of more liberal ideologies by women and more conservative ideologies by men is not surprising. Men use political conservatism to (unconsciously) advertise their likely social and economic dominance; women use political liberalism to advertise their nurturing abilities. The shift from liberal youth to conservative middle age reflects a mating-relevant increase in social dominance and earnings power, not just a rational shift in one’s self-interest.

More subtly, because mating is a social game in which the attractiveness of a behavior depends on how many other people are already producing that behavior, political ideology evolves under the unstable dynamics of game theory, not as a process of simple optimization given a set of self-interests. This explains why an entire student body at an American university can suddenly act as if they care deeply about the political fate of a country that they virtually ignored the year before. The courtship arena simply shifted, capriciously, from one political issue to another, but once a sufficient number of students decided that attitudes towards apartheid were the acid test for whether one’s heart was in the right place, it became impossible for anyone else to be apathetic about apartheid. This is called frequency-dependent selection in biology, and it is a hallmark of sexual selection processes.

What can policy analysts do, if most people treat political ideas as courtship displays that reveal the proponent’s personality traits, rather than as rational suggestions for improving the world? The pragmatic, not to say cynical, solution is to work with the evolved grain of the human mind by recognizing that people respond to policy ideas first as big-brained, idea-infested, hyper-sexual primates, and only secondly as concerned citizens in a modern polity. This view will not surprise political pollsters, spin doctors, and speech writers, who make their daily living by exploiting our lust for ideology, but it may surprise social scientists who take a more rationalistic view of human nature. Fortunately, sexual selection was not the only force to shape our minds. Other forms of social selection such as kin selection, reciprocal altruism, and even group selection seem to have favoured some instincts for political rationality and consensual egalitarianism. Without the sexual selection, we would never have become such colourful ideological animals. But without the other forms of social selection, we would have little hope of bringing our sexily protean ideologies into congruence with reality.

Beyond Turing: A Solution to the Problem of Other Minds Using Mindmelding and Phenomenal Puzzles

Here is my attempt at providing an experimental protocol to determine whether an entity is conscious.

If you are just looking for the stuffed animal music video, skip to 23:28.


Are you the only conscious being in existence? How could we actually test whether other beings have conscious minds?

Turing proposed to test the existence of other minds by measuring their verbal indistinguishability from humans (the famous “Turing Test” asks computers to pretend to be humans and checks if humans buy the impersonations). Others have suggested the solution is as easy as connecting your brain to the brain of the being you want to test.

But these approaches fail for a variety of reasons. Turing tests can be beaten by dream characters and mindmelds might merely work by giving you a “hardware upgrade”. There is no guarantee that the entity tested will be conscious on its own. As pointed out by Brian Tomasik and Eliezer Yudkowsky, even if the information content of your experience increases significantly by mindmelding with another entity, this could still be the result of the entity’s brain working as an exocortex: it is completely unconscious on its own yet capable of enhancing your consciousness.

In order to go beyond these limiting factors, I developed the concept of a “phenomenal puzzle”. These are problems that can only be solved by a conscious being in virtue of requiring inner qualia operations for their solution. For example, a phenomenal puzzle is to arrange qualia values of phenomenal color in a linear map where the metric is based on subjective Just Noticeable Differences.
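To make this concrete, here is a toy version of the color-ordering puzzle. It uses the CIE76 ΔE (Euclidean distance in CIELAB space) as a crude computational stand-in for subjective Just Noticeable Differences; the coordinates and brute-force search are illustrative placeholders only (the actual puzzle must be solved with inner qualia operations, which is the whole point):

```python
import numpy as np
from itertools import permutations

def delta_e(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIELAB space, a rough
    # proxy for perceptual Just Noticeable Differences.
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical qualia values, given as approximate (L*, a*, b*) coordinates.
colors = {"red": (53, 80, 67), "orange": (75, 24, 79), "yellow": (97, -22, 94)}

# "Solve" the linear-map puzzle for this tiny case: order the colors so that
# the summed difference along consecutive pairs is minimal.
best = min(permutations(colors),
           key=lambda p: sum(delta_e(colors[a], colors[b])
                             for a, b in zip(p, p[1:])))
print(best)   # ('red', 'orange', 'yellow') or its reverse
```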

To conduct the experiment you need:

  1. A phenomenal bridge (e.g. a biological neural network that connects your brain to someone else’s brain so that both brains now instantiate a single consciousness).
  2. A qualia calibrator (a device that allows you to cycle through many combinations of qualia values quickly so that you can compare the sensory-qualia mappings in both brains and generate a shared vocabulary for qualia values).
  3. A phenomenal puzzle (as described above).
  4. The right set and setting: the use of a proper protocol.

Here is an example protocol that works for 4) – though there may be other ones that work as well. Assume that you are person A and you are trying to test if B is conscious:

A) Person A learns about the phenomenal puzzle but is not given enough time to solve it.
B) Person A and B mindmeld using the phenomenal bridge, creating a new being AB.
C) AB tells the phenomenal puzzle to itself (by remembering it from A’s narrative).
D) A and B get disconnected and A is sedated (to prevent A from solving the puzzle).
E) B tries to solve the puzzle on its own (the use of computers not connected to the internet is allowed to facilitate self-experimentation).
F) When B claims to have solved it A and B reconnect into AB.
G) AB then tells the solution to itself so that the records of it in B’s narrative get shared with A’s brain memory.
H) Then A and B get disconnected again and if A is able to provide the answer to the phenomenal puzzle, then B must have been conscious!
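Why does a positive result implicate B? A purely illustrative bit of bookkeeping over the protocol’s information flow makes it visible: the solution can only enter the system during B’s solo episode, so if A can report it at the end, B must have produced it.

```python
# Track which items of information each person holds at each step.
A, B = {"puzzle"}, set()    # step A: only A has seen the puzzle

AB = A | B                  # step B: the mindmeld AB holds the union
A, B = set(AB), set(AB)     # steps C-D: split; both retain the narrative,
                            # and A is sedated, so A's set gains nothing new
B.add("solution")           # step E: B solves the puzzle on its own

AB = A | B                  # steps F-G: reconnect; the solution is shared
A = set(AB)                 # step H: after splitting, A can report it

assert "solution" in A      # the report is traceable to B's solo episode
```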

To my knowledge, this is the only test of consciousness for which a positive result is impossible (or perhaps just extremely difficult) to explain unless B is conscious.

Of course B could be conscious but not smart enough to solve the phenomenal puzzle. The test simply guarantees that there will be no false positives. Thus it is not a general test for qualia – but it is a start. At least we can now conceive of a way to know (in principle) whether some entities are conscious (even if we can’t tell that any arbitrary entity is). Still, a positive result would completely negate solipsism, which would undoubtedly be a great philosophical victory.

Core Philosophy

David Pearce asked me ages ago to make accessible videos about transhumanism, consciousness, and the abolitionist project. Well, here is a start.

In this video I outline the core philosophy and objectives of Qualia Computing. There are three main goals here:

 

  1. Catalogue the entire state-space of consciousness
  2. Identify the computational properties of each experience (and its qualia components), and
  3. Reverse engineer valence (i.e. to discover the function that maps formal descriptions of states of consciousness to values in the pleasure-pain axis)
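To pin down what the third goal is asking for, here is a hypothetical type-level sketch. Everything in it is a placeholder; the claim is only that the research program seeks *some* function of this shape, from formal state descriptions to positions on the pleasure-pain axis:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StateDescription:
    # Placeholder formal description of a state of consciousness; here we
    # simply reuse the CDNS triple (consonance/dissonance/noise) from
    # Quantifying Bliss.
    consonance: float
    dissonance: float
    noise: float

Valence = float  # pleasure-pain axis, say -1.0 (worst) to +1.0 (best)
ValenceFunction = Callable[[StateDescription], Valence]

def stv_toy(s: StateDescription) -> Valence:
    # A toy candidate in the spirit of the Symmetry Theory of Valence:
    # net consonance as a fraction of total energy.
    total = s.consonance + s.dissonance + s.noise
    return 0.0 if total == 0 else (s.consonance - s.dissonance) / total
```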

 

While describing the 1st objective I explain that we start by realizing that consciousness is doing something useful (or evolution would not have been able to recruit it for information-processing purposes). I also go on to explain the difference between qualia varieties (e.g. phenomenal color, smell, touch, thought, etc.) and qualia values (i.e. the specific points in the state-spaces defined by the varieties, such as “pure phenomenal blue” or the smell of cardamom).

 

With regards to the 2nd objective, I explain that our minds actually use the specific properties of each qualia variety in order to represent information states and then to solve computational problems. We are only getting started in this project.

 

And 3rd, I argue that discovering exactly what makes an experience “worth living”, in a formal and mathematical way, is indeed ethically urgent. With a fundamental understanding of valence, we can develop precise interventions to reduce (or even prevent altogether) any form of suffering without messing up our capacity to think and explore the state-space of consciousness (at least the valuable part of it).

 

I conclude by pointing out that the 1st and 2nd research programs actually interact in non-trivial ways: There is a synergy between them which may lead us to a recursively self-improving intelligence (and do so in a far “safer” way than trying to build an AGI through digital software).

David Pearce on the “Schrodinger’s Neurons Conjecture”

My friend +Andrés Gómez Emilsson on Qualia Computing: LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

 

Most truly radical intellectual progress depends on “crazy” conjectures. Unfortunately, few folk who make crazy conjectures give serious thought to extracting novel, precise, experimentally falsifiable predictions to confound their critics. Even fewer then publish the almost inevitable negative experimental result when their crazy conjecture isn’t confirmed. So kudos to Andrés for doing both!!

 

What would the world look like if the superposition principle never breaks down, i.e. the unitary Schrödinger dynamics holds on all scales, and not just the microworld? The naïve – and IMO mistaken – answer is that without the “collapse of the wavefunction”, we’d see macroscopic superpositions of live-and-dead cats, experiments would never appear to have determinate outcomes, and the extremely well tested Born rule (i.e. the probability of a result is the squared absolute value of the inner product) would be violated. Or alternatively, assuming DeWitt’s misreading of Everett, if the superposition principle never breaks down, then when you observe a classical live cat, or a classical dead cat, your decohered (“split”) counterpart in a separate classical branch of the multiverse sees a dead cat or a live cat, respectively.

 

In my view, all these stories rest on a false background assumption. Talk of “observers” and “observations” relies on a naïve realist conception of perception whereby you (the “observer”) somehow hop outside of your transcendental skull to inspect the local mind-independent environment (“make an observation”). Such implicit perceptual direct realism simply assumes – rather than derives from quantum field theory – the existence of unified observers (“global” phenomenal binding) and phenomenally-bound classical cats and individually detected electrons striking a mind-independent classical screen cumulatively forming a non-classical interference pattern (“local” phenomenal binding). Perception as so conceived – as your capacity for some sort of out-of-body feat of levitation – isn’t physically possible. The role of the mind-independent environment beyond one’s transcendental skull is to select states of mind internal to your world-simulation; the environment can’t create, or imprint its signature on, your states of mind (“observations”) – any more than the environment can create or imprint its signature on your states of mind while you’re dreaming.

 

Here’s an alternative conjecture – a conjecture that holds regardless of whether you’re drug-naïve, stone-cold sober, having an out-of-body experience on ketamine, awake or dreaming, or tripping your head off on LSD. You’re experiencing “Schrodinger’s cat” states right nowin virtue of instantiating a classical world-simulation. Don’t ask what’s it like to perceive a live-and-dead Schrödinger’s cat; ask instead what it’s like to instantiate a coherent superposition of distributed feature-processing neurons. Only the superposition principle allows you to experience phenomenally-bound classical objects that one naively interprets as lying in the mind-independent world. In my view, the universal validity of the superposition principle allows you to experience a phenomenally bound classical cat within a seemingly classical world-simulation – or perform experiments with classical-looking apparatus that have definite outcomes, and confirm the Born rule. Only the vehicle of individual coherent superpositions of distributed neuronal feature-processors allows organic mind-brains to run world simulations described by an approximation of classical Newtonian physics. In the mind-independent world – i.e. not the world of your everyday experience – the post-Everett decoherence program in QM pioneered by Zeh, Zurek et al. explains the emergence of an approximation of classical “branches” for one’s everyday world-simulations to track. Yet within the CNS, only the superposition principle allows you to run a classical world-simulation tracking such gross fitness-relevant features of your local extracranial environment. A coherent quantum mind can run phenomenally-bound simulations of a classical world, but a notional classical mind couldn’t phenomenally simulate a classical world – or phenomenally simulate any other kind of world. For a supposedly “classical” mind would just be patterns of membrane-bound neuronal mind-dust: mere pixels of experience, a micro-experiential zombie.

 

Critically, molecular matter-wave interferometry can in principle independently be used to test the truth – or falsity – of this conjecture (see: https://www.physicalism.com/#6).

 

OK, that’s the claim. Why would (almost) no scientifically informed person take the conjecture seriously?

 

In a word, decoherence.

 

On a commonsense chronology of consciousness, our experience of phenomenally bound perceptual objects “arises” via patterns of distributed neuronal firings over a timescale of milliseconds – the mystery lying in how mere synchronised firing of discrete, decohered, membrane-bound neurons / micro-experiences could generate phenomenal unity, whether local or global. So if the lifetime of coherent superpositions of distributed neuronal feature-processors in the CNS were milliseconds, too, then there would be an obvious candidate for a perfect structural match between the phenomenology of our conscious minds and neurobiology / fundamental physics – just as I’m proposing above. Yet of course this isn’t the case. The approximate theoretical lifetimes of coherent neuronal superpositions in the CNS can be calculated: femtoseconds or less. Thermally-induced decoherence is insanely powerful and hard to control. It’s ridiculous – intuitively at any rate – to suppose that such fleeting coherent superpositions could be recruited to play any functional role in the living world. An epic fail!

 

Too quick.
Let’s step back.
Many intelligent people initially found it incredible that natural selection could be powerful enough to throw up complex organisms as thermodynamically improbable as Homo sapiens. We now recognise that the sceptics were mistaken: the human mind simply isn’t designed to wrap itself around evolutionary timescales of natural selection playing out over hundreds of millions of years. In the CNS, another form of selection pressure plays out – a selection pressure over one hundred of orders of magnitude (sic) more powerful than selection pressure on information-bearing self-replicators as conceived by Darwin. “Quantum Darwinism” as articulated by Zurek and his colleagues isn’t the shallow, tricksy metaphor one might naively assume; and the profound implications of such a selection mechanism must be explored for the world-simulation running inside your transcendental skull, not just for the extracranial environment. At work here is unimaginably intense selection pressure favouring comparative resistance to thermally (etc)-induced decoherence [i.e. the rapid loss of coherence of complex phase amplitudes of the components of a superposition] of functionally bound phenomenal states of mind in the CNS. In my view, we face a failure of imagination of the potential power of selection pressure analogous to the failure of imagination of critics of Darwin’s account of human evolution via natural selection. It’s not enough lazily to dismiss sub-femtosecond decoherence times of neuronal superpositions in the CNS as the reductio ad absurdum of quantum mind. Instead, we need to do the interferometry experiments definitively to settle the issue, not (just) philosophize.

 

Unfortunately, unlike Andrés, I haven’t been able to think of a DIY desktop experiment that could falsify or vindicate the conjecture. The molecular matter-wave experiment I discuss in “Schrodinger’s Neurons” is conceptually simple but (horrendously) difficult in practice. And the conjecture it tests is intuitively so insane that I’m sometimes skeptical the experiment will ever get done. If I sound like an advocate rather than a bemused truth-seeker, I don’t mean to be so; but if phenomenal binding *isn’t* quantum-theoretically or classically explicable, then dualism seems unavoidable. In that sense, David Chalmers is right.

 

How come I’m so confident that superposition principle doesn’t break down in the CNS? After all, the superposition principle has been tested only up to the level of fullerenes, and no one yet has a proper theory of quantum gravity. Well, besides the classical impossibility of the manifest phenomenal unity of consciousness, and the cogent reasons that a physicist would give you for not modifying the unitary Schrödinger dynamics, the reason is really just a philosophical prejudice on my part. Namely, the universal validity of the superstition principle of QM offers the only explanation-space that I can think of for why anything exists at all: an informationless zero ontology dictated by the quantum analogue of the library of Babel.

 

We shall see.

– David Pearce, commenting on the latest significant article published on this blog.

The Mating Mind

Geoffrey Miller is the author of “The Mating Mind”, a highly interesting book on what evolutionary biology has to say about all of our weird “dating and sexual quirks.” David Pearce highly recommends it, too.

Miller’s talk in this video is just as interesting as Ogi Ogas’ talk about his book “A Billion Wicked Thoughts”. Both talks deal with the evolutionary basis of human sexual desires (yes, even the weird ones… specially the weird ones):

Both use sound empirical methods and develop theories of our sexuality based on genetic, anthropological, and biological analysis of human experience and behavior.

Here is an interesting observation: if we were descendants of a species that reproduced by cloning (or that formed large eusocial colonies, like bees or ants), then we would all love each other unreservedly.

Competition for good genes has made us quasi-psychopathic and selfish. The fall of humanity is not, apparently, the result of sinning against God, but rather of having evolved in small tribes with heavy in-group genetic biases.

Likewise, our Darwinian origin is responsible for states of low mood, depression, anxiety, and so on. Depression itself, to dive into a specific example, was an adaptive strategy for non-alpha males in the ancestral environment, predisposing you to keep your head low and reproduce in spite of the presence of an alpha male capable of killing you if you challenged him. Additionally, depression is a behavioral response that allows you to passively accept and endure a long-lasting stressor, in situations where “trying to make things right”, instead of submitting to the reality of the situation, was simply not as genetically adaptive. Of course, since we don’t live in the African savannah anymore, all of that programming is useless.

Unfortunately, since happiness is itself a sign of status, we are stuck in an awful Moloch scenario: Geoffrey Miller would agree that people are sexually motivated to *pretend that they are happier than they actually are*.

Setting aside people with a heavy genetic predisposition to depression, who cannot even *conceive of what happiness is*, most people are stuck in recurrent cycles of high, neutral, and low moods. And yet they are anxious to pretend that they are happier than they really are; after all, one’s genes are at stake in this signaling activity.

I have often met highly intelligent people who seem incapable of understanding David Pearce’s Hedonistic Imperative. Although there are many possible causes for this, a very prominent one is the fact that believing that “everyone has a chance to be happy” is itself a happy thought. We run away from depressive worldviews, even if doing so is ethically disastrous.

Let us hope for the best, but plan for the worst.

Yes, we can hope that somehow everyone has a chance to be happy, and sincerely wish that “it really isn’t that bad.” However, let us not act *as if this were true*. We are in a unique position to alleviate, and outright exterminate, all future suffering in our forward light cone. It would be really sad if we let billions of beings suffer for eons (say, in other galaxies) simply because we entertained too heavily the thought that reality is conspiring in “our” favor (nature, perhaps, is not as kind as it looks when one is in a happy state).

Work Religion

In response to this:

Did this religion of professional work start in the 1950s? It could have been synergetic with WWII patriotism. What a vicious cycle: people who love the professional culture, ideology, and rituals spread the idea that you are personally disadvantaged unless you subscribe to this allegedly prevalent system, so people subscribe and the system becomes prevalent. Even before becoming prevalent, though, people who want the system to become prevalent act confident and mislead others into thinking it is already prevalent, and so they join up in a tragedy of the commons.

Next time you see someone with an overly clean desk and an excessive zeal for office supplies, formal clothes, and “professionalism”-sanctioned social interactions (the arbitrary templates where gossip is permitted or even recommended, but where many instances of non-maliciously motivated honesty, or just acting natural and not so “fake”, that you would find between people in any other social situation are blasphemy), someone who actively spreads and enforces their culture, who looks confident, posture and all (even/especially if they’re friendly, which is less suspicious), as if they know how to correctly act, causing that tragedy to occur, give ’em the ol’ stink eye for me, would ya? Don’t give a stink eye! Haha, I’m kidding about seriously desiring their punishment. That’s humor, for ya.

But it would be nice if more people were self-conscious about what it is they’re doing, the deceit of their tragedy-inducing confidence, BEFORE they became so deeply emotionally attached to the games, before they so intimately internalized them and, indeed, became drones for these parasitic memetic/cultural systems, a process that begins immediately after infancy. It’s terrible when an old generation has existential distress when their games are ripped out from underneath them; it’s terrible that these games spread this way; and it’s terrible that people who wouldn’t otherwise be (maximally) interested in this or that particular game are intimidated into the game, often a severe zero-sum game, where they might be trapped for the rest of their lives in bad faith, toiling away at it and speaking the creeds in defense of it.

Whether you oppress with your culture or are oppressed by a culture, we all aren’t that different after all, because we humans are all oppressed by the phenomenon of culture itself, prevented from reaching far greater potentials. (And how might you evaluate the greatness of a potential outside of cultural values? Probably with utilitarianism-like cognitive frameworks, which don’t posit the objectively valuable things, like heroism, honor, big daddy, etc., that cultures invariably depend on.) Anyway, this discussion pertains to those who are disturbed by the thought of losing their culture, their forms of prestige, popularity, valor, or whatever social-reward objective correlate they think they are fundamentally dependent on. People have trouble conceiving of themselves attaining happiness or fulfillment or whatever ultimate valuable (which is almost certainly hedonic tone) without their culture. But subtly they misconceive what it would be like for them, their consciousness, to be beyond their current perspective. Even though in words they might appear to be contemplating possibilities beyond their current persuasion, they often fail to, because it is such a fundamental and subtle thing to do, requiring them to keep vivid track of so much of their reasoning and schemata.

When trying to suspend a certain body of frameworks, systems, and assumptions, insidiously they creep back in. First of all, most people aren’t even aware they can think beyond them: their concept of what it is to think and consider is actually limited to thinking within such social paradigms. But if you’re trying to question a system, schemata that originate from the very system you’re trying to suspend insidiously pose as necessary, neutral, universal, etc., as things beyond the old paradigms, when they really aren’t. Little under-cover mental viruses. In my opinion, many representative philosophers, and people in academic and analytic philosophy in general, fail to work free of this effect. They fail to detect and address their social bias in very basic ways, for instance positing according to predicted language use (“common sense intuitions” are the worst), not according to evidence for the existence of some entity to be posited.

When will it be that culture doesn’t oppress, that people don’t repress critical or otherwise free thinking, that people don’t have so much socially motivated reasoning (the large majority of one’s thoughts and beliefs are really just predicted social strategies, not true reflection freer of the frames and biases of culture)? When will our interests and our sources of meaning and fulfillment, like the currently socially permissible or even required ones, be replaced by those NOT so bad for us? When will we “be not afraid” of questioning our very fragile personal, interpersonal, moral, and social institutional views, risking never getting them back, so we don’t have to live like this, as cultural sustenance, forced by oppression into forcing by oppression, as conduits of oppression?

 

– Anonymous Source

David Pearce on “Making Sentience Great”

We need measures of intelligence richer than today’s simple-minded autistic IQ/AQ tests. In principle, we could breed super-intelligent humans like strains of smart mice.

 

If I were running the program, I’d use cloning with variations. Start with the DNA of promising candidates, especially Ashkenazi Jews. (John von Neumann, for instance, was buried, not cremated).

 

Using the new tools of CRISPR-based synthetic biology, splice in genes for depression-resistance and perhaps hyper-empathy. Develop and optimize artificial wombs to foster bigger and better embryonic brains; traditional biological pregnancies involve ferocious genetic conflict between mother and embryo, whereas in the future the creation of new life can be geared to the well-being of the unborn child. You can then hothouse the products in an optimally enriched environment. And then clone (with variations) the most promising candidates. No need to wait a whole generation; if a kid wins a Fields Medal aged nine, then clone again with further genetic tweaking. Super-Shulgin academies would have pride of place, together with the EA bioethics department. Spin off a financial services and innovation division so the project becomes self-financing.

 

Recursive genetic self-improvement could in principle be sustained indefinitely, presumably with an increasing degree of “cyborgisation”: not even an unenriched super-von-Neumann could match the serial depth of processing of a digital computer, but with “narrow AI” routinely implanted on web-enabled neurochips, no matter. The demise of aging and the rapid growth of genetic self-editing software would presumably make talk of “generations” in the traditional Darwinian sense increasingly archaic.

 

Can we foresee any ethical pitfalls? One or two; but if the raison d’être of the project were to promote the well-being of all sentience in our forward light-cone, would you decline the offer of an initial billion-dollar grant?

 

– David Pearce, answering the question “How would you create a super-intelligence?” (source: Facebook)