Every Qualia Computing Article Ever

 The three main goals of Qualia Computing are to:

  1. Catalogue the entire state-space of consciousness
  2. Identify the computational properties of experience, and
  3. Reverse engineer valence (i.e. discover the function that maps formal descriptions of states of consciousness to values along the pleasure-pain axis)

Core Philosophy (2016)

2018

What If God Was a Closed Individualist Presentist Hedonistic Utilitarian With an Information-Theoretic Identity of Indiscernibles Ontology? (quote)

Every Qualia Computing Article Ever

Qualia Computing Attending The Science of Consciousness 2018

Everything in a Nutshell (quote)

2017

Would Maximally Efficient Work Be Fun? (quote)

The Universal Plot: Part I – Consciousness vs. Pure Replicators (long)

No-Self vs. True Self (quote)

Qualia Manifesto (quote)

What Makes Tinnitus, Depression, and the Sound of the Bay Area Rapid Transit (BART) so Awful: Dissonance

Traps of the God Realm (quote)

Avoid Runaway Signaling in Effective Altruism (transcript)

Burning Man (long)

Mental Health as an EA Cause: Key Questions

24 Predictions for the Year 3000 by David Pearce (quote)

Why I think the Foundational Research Institute should rethink its approach (quote/long)

Quantifying Bliss: Talk Summary (long)

Connectome-Specific Harmonic Waves on LSD (transcript)

ELI5 “The Hyperbolic Geometry of DMT Experiences”

Qualia Computing at Consciousness Hacking (June 7th 2017)

Principia Qualia: Part II – Valence (quote)

The Penfield Mood Organ (quote)

The Most Important Philosophical Question

The Forces At Work (quote)

Psychedelic Science 2017: Take-aways, impressions, and what’s next (long)

How Every Fairy Tale Should End

Political Peacocks (quote)

OTC remedies for RLS (quote)

Their Scientific Significance is Hard to Overstate (quote)

Memetic Vaccine Against Interdimensional Aliens Infestation (quote)

Raising the Table Stakes for Successful Theories of Consciousness

Qualia Computing Attending the 2017 Psychedelic Science Conference

GHB vs. MDMA (quote)

Hedonium

2016

The Binding Problem (quote)

The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes (long)

Thinking in Numbers (quote)

Praise and Blame are Instrumental (quote)

The Tyranny of the Intentional Object

Schrödinger’s Neurons: David Pearce at the “2016 Science of Consciousness” conference in Tucson

Beyond Turing: A Solution to the Problem of Other Minds Using Mindmelding and Phenomenal Puzzles

Core Philosophy

David Pearce on the “Schrodinger’s Neurons Conjecture” (quote)

Samadhi (quote)

Panpsychism and Compositionality: A solution to the hard problem (quote)

LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid? (long)

Empathetic Super-Intelligence

Wireheading Done Right: Stay Positive Without Going Insane (long)

Just the fate of our forward light-cone

Information-Sensitive Gradients of Bliss (quote)

A Single 3N-Dimensional Universe: Splitting vs. Decoherence (quote)

Algorithmic Reduction of Psychedelic States (long)

So Why Can’t My Boyfriend Communicate? (quote)

The Mating Mind

Psychedelic alignment cascades (quote)

36 Textures of Confusion

Work Religion (quote)

Qualia Computing in Tucson: The Magic Analogy

In Praise of Systematic Empathy

David Pearce on “Making Sentience Great” (quote)

Philosophy of Mind Diagrams

Ontological Runaway Scenario

Peaceful Qualia: The Manhattan Project of Consciousness (long)

Qualia Computing So Far

You are not a zombie (quote)

What’s the matter? It’s Schrödinger, Heisenberg and Dirac’s (quote)

The Biointelligence Explosion (quote)

A (Very) Unexpected Argument Against General Relativity As A Complete Account Of The Cosmos

Status Quo Bias

The Super-Shulgin Academy: A Singularity I Can Believe In (long)

The effect of background assumptions on psychedelic research

2015

An ethically disastrous cognitive dissonance…

Some Definitions (quote)

Who should know about suffering?

Ontological Qualia: The Future of Personal Identity (long)

Google Hedonics

Solutions to World Problems

Why does anything exist? (quote)

State-Space of Background Assumptions

Personal Identity Joke

Getting closer to digital LSD

Psychedelic Perception of Visual Textures 2: Going Meta

On Triviality (quote)

State-Space of Drug Effects: Results

How to secretly communicate with people on LSD

Generalized Wada Test and the Total Order of Consciousness

State-space of drug effects

Psychophysics for Psychedelic Research: Textures (long)

I only vote for politicians who have used psychedelics. EOM.

Why not computing qualia?

David Pearce’s daily morning cocktail (2015) (quote)

Psychedelic Perception of Visual Textures

Should humans wipe out all carnivorous animals so the succeeding generations of herbivores can live in peace? (quote)

A workable solution to the problem of other minds

The fire that breathes reality into the equations of physics (quote)

Phenomenal Binding is incompatible with the Computational Theory of Mind

David Hamilton’s conversation with Alf Bruce about the nature of the mind (quote)

Manifolds of Consciousness: The emerging geometries of iterated local binding

The Real Tree of Life

Phenomenal puzzles – CIELAB

The psychedelic future of consciousness

Not zero-sum

Discussion of Fanaticism (quote)

What does comparatively matter in 2015?

Suffering: Not what your sober mind tells you (quote)

Reconciling memetics and religion.

The Reality of Basement Reality

The future of love

2014

And that’s why we can and cannot have nice things

Breaking the Thought Barrier: Ethics of Brain Computer Interfaces in the workplace

How bad does it get? (quote)

God in Buddhism

Practical metaphysics

Little known fun fact

Crossing borders (quote)

A simple mystical explanation

 


Bolded titles mean that the linked article is foundational: it introduces new concepts, vocabulary, heuristics, research methods, frameworks, and/or thought experiments that are important for the overall project of consciousness research. These tend to be articles that also discuss concepts in much greater depth than other articles.

The “long” tag means that the post has at least 4,000 words. Most of these long articles are in the 6,000 to 10,000 word range. The longest Qualia Computing article is the one about Burning Man which is about 13,500 words long (and also happens to be foundational as it introduces many new frameworks and concepts).

Quotes and transcripts are usually about: evolutionary psychology, philosophy of mind, ethics, neuroscience, physics, meditation, and/or psychedelic phenomenology. By far, David Pearce is the most quoted person on Qualia Computing.


Fast stats:

  • Total number of posts: 114
  • Foundational articles: 25
  • Articles over 4,000 words: 14
  • Original content: 71
  • Quotes and transcripts: 43

 

Everything in a Nutshell

David Pearce at Quora in response to the question: “What are your philosophical positions in one paragraph?”:

“Everyone takes the limits of his own vision for the limits of the world.”
(Schopenhauer)

All that matters is the pleasure-pain axis. Pain and pleasure disclose the world’s inbuilt metric of (dis)value. Our overriding ethical obligation is to minimise suffering. After we have reprogrammed the biosphere to wipe out experience below “hedonic zero”, we should build a “triple S” civilisation based on gradients of superhuman bliss. The nature of ultimate reality baffles me. But intelligent moral agents will need to understand the multiverse if we are to grasp the nature and scope of our wider cosmological responsibilities. My working assumption is non-materialist physicalism. Formally, the world is completely described by the equation(s) of physics, presumably a relativistic analogue of the universal Schrödinger equation. Tentatively, I’m a wavefunction monist who believes we are patterns of qualia in a high-dimensional complex Hilbert space. Experience discloses the intrinsic nature of the physical: the “fire” in the equations. The solutions to the equations of QFT or its generalisation yield the values of qualia. What makes biological minds distinctive, in my view, isn’t subjective experience per se, but rather non-psychotic binding. Phenomenal binding is what consciousness is evolutionarily “for”. Without the superposition principle of QM, our minds wouldn’t be able to simulate fitness-relevant patterns in the local environment. When awake, we are quantum minds running subjectively classical world-simulations. I am an inferential realist about perception. Metaphysically, I explore a zero ontology: the total information content of reality must be zero on pain of a miraculous creation of information ex nihilo. Epistemologically, I incline to a radical scepticism that would be sterile to articulate. Alas, the history of philosophy twinned with the principle of mediocrity suggests I burble as much nonsense as everyone else.

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

Alan Watts, talking about Camus’ claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented, consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (a.k.a. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together, as a unit, in order to hang together. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. In the meantime, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).
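To make assumptions (2) and (3) concrete, here is a toy sketch, purely illustrative and not a model endorsed by Qualia Computing: a “formal description” of an experience is stubbed as a list of component frequencies, and valence is scored as negative total pairwise dissonance, loosely echoing the dissonance account of tinnitus and the BART sound discussed elsewhere on the blog. The function names, the curve shape, and the 40 Hz bandwidth constant are all assumptions made up for the example.

```python
import math

# Toy illustration only (not Qualia Computing's actual model): a "formal
# description" of an experience is stubbed as a list of component
# frequencies, and valence is the negative of total pairwise dissonance.
# The curve shape and the 40 Hz bandwidth constant are assumptions.

def pair_dissonance(f1, f2, bandwidth=40.0):
    # Crude Plomp-Levelt-style curve: dissonance peaks when two
    # components are close but not identical, and falls off as they
    # separate or coincide.
    x = abs(f1 - f2) / bandwidth
    return 4.0 * x * math.exp(-2.0 * x)

def toy_valence(frequencies):
    # Map the "formal description" to a single number on a
    # pleasure-pain-like axis: more dissonance -> more negative valence.
    return -sum(
        pair_dissonance(frequencies[i], frequencies[j])
        for i in range(len(frequencies))
        for j in range(i + 1, len(frequencies))
    )

print(toy_valence([440.0, 880.0]))  # octave: nearly zero dissonance
print(toy_valence([440.0, 460.0]))  # close clash: clearly negative
```

The point of the sketch is only the type signature: a function from formal descriptions of states of consciousness to values along the pleasure-pain axis, which is exactly what “reverse engineering valence” would have to deliver.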

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain *in its own terms* why other worldviews are popular as well. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development, we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people: that evil beings exist objectively, and that it is righteous to punish them and regard them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck here. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad are” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, in which there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumption of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above) we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived as either sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

– “Balance Between Good and Evil”, TVTropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication; because they have experienced learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws); or because they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and projected it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, binds them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory-farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and lack of thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for that reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, in that they recognize that some states of consciousness are inherently better than others in an intrinsic, subjective-value sense.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics, for fear of sounding unwise or insensitive. And when push comes to shove, they lack good arguments to decisively rule out why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.
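The selection dynamic described above can be illustrated (though by no means formalized) with a minimal simulation. Everything here is an assumption made up for the example, not the article’s own formalization: agents that divert resources toward wellbeing pay a small fitness cost, and a fixed-size population is resampled each generation in proportion to fitness.

```python
import random

# Toy sketch of the pure-replicator dynamic (illustrative numbers only,
# not the article's formalization): "caring" agents pay a fitness cost
# for investing resources in wellbeing, so selection favors pure
# replicators, which invest everything in copying themselves.

def next_generation(population, care_cost=0.1):
    # Fixed-size population resampled in proportion to fitness.
    weights = [1.0 - care_cost if cares else 1.0 for cares in population]
    return random.choices(population, weights=weights, k=len(population))

random.seed(0)
population = [True] * 50 + [False] * 50  # True = cares about valence
for _ in range(2000):
    population = next_generation(population)
print(sum(population))  # count of caring agents left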

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures lead to the opposite direction, where negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and, relative to the general population, they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). What is much needed now is formalizing the situation and working out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts suggesting it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used to steer evolution in a positive direction. In particular, as of 2017:

  1. A very large proportion of the economy is dedicated to trading positive experiences for money, rather than mere survival or tools of power. Thus an economy of information about states of consciousness is still feasible.
  2. A large fraction of the population is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and, ultimately, economically useful (see: hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested (or at least curious) about the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence but also, amazingly, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies in which the highest value is collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider, for example, the contrast between Burning Man and Las Vegas. Burning Man works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place filled to the brim with pure replicators in the form of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that makes it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. Per the definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. In the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they require. Such utility will be a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Image: system_specifications]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence: the inputs, the outputs, the constraints (such as temperature, materials, etc.), and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), read as: “the efficiency with which a given output is generated, given the input, the system being used, and the physical constraints in place.”

[Image: basic_system]

Now, we introduce the notion of the “valence for the system given a particular input” (i.e. the valence for the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

[Image: valence_altruism]
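The distinction between a pure replicator and an altruistic intelligence can be made concrete with a minimal Python sketch. This assumes a simple additive form for U; the weight `lambda_altruism` and the numeric values are hypothetical illustrations, not part of the formalism itself:

```python
def utility(e_value, v_value, lambda_altruism=0.0):
    """U(e(o|i,s,c), v(s|i)): overall utility of a system's output.

    e_value: the efficiency metric e(o|i, s, c).
    v_value: the valence of the system given the input, v(s|i).
    lambda_altruism: how much the intelligence cares about valence.
      A pure replicator uses 0.0 (valence is simply ignored); an
      altruist uses a positive weight, which penalizes negative
      valence and rewards positive valence.
    """
    return e_value + lambda_altruism * v_value

# The same efficient-but-suffering system, evaluated by both kinds
# of intelligence:
pure_replicator = utility(0.9, -5.0, lambda_altruism=0.0)  # 0.9
altruist = utility(0.9, -5.0, lambda_altruism=1.0)         # about -4.1
```

The point of the sketch is that nothing in the efficiency term forces an intelligence to set the altruism weight above zero; that indifference is exactly what defines a pure replicator here.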

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the sum, over inputs, of the utility of each input weighted by its probability.

[Image: input_probabilities]

(Note: a more complete formalization would also weigh the importance of each input-output transformation, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system, given the distribution of inputs it is used for, its valence, its efficiency metrics, and its constraints, as E[U(s|v, e, c, P(I))]:

[Image: chosen_system]

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
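As a toy illustration of this selection step, the sketch below computes E[U] as the probability-weighted sum of per-input utilities and picks the argmax over candidate systems. The two systems, the input distribution, and every number are invented for illustration; only the additive utility form from the sketch above is assumed:

```python
def expected_utility(system, input_dist, lambda_altruism=0.0):
    """E[U(s|v, e, c, P(I))]: per-input utility weighted by P(i)."""
    return sum(
        p * (system["efficiency"][i] + lambda_altruism * system["valence"][i])
        for i, p in input_dist.items()
    )

def choose_system(systems, input_dist, lambda_altruism=0.0):
    """The intelligence picks the system maximizing expected utility."""
    return max(systems, key=lambda name: expected_utility(
        systems[name], input_dist, lambda_altruism))

input_dist = {"i1": 0.7, "i2": 0.3}  # P(I): how often each input occurs
systems = {
    # maximally efficient, but its states carry strong negative valence
    "efficient_sufferer": {"efficiency": {"i1": 1.0, "i2": 1.0},
                           "valence":    {"i1": -10.0, "i2": -10.0}},
    # slightly less efficient, but its states are mildly pleasant
    "humane_but_slower":  {"efficiency": {"i1": 0.8, "i2": 0.8},
                           "valence":    {"i1": 2.0, "i2": 2.0}},
}
```

With these toy numbers, an intelligence with zero altruism weight selects "efficient_sufferer", while one with a positive weight selects "humane_but_slower", which is the core of the argument that follows: when reproductive success tracks only efficiency, indifference to valence wins.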

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on their altruism. If altruism does not reduce such reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences whose utility functions are indifferent to the valence of the systems they use to process information). As evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of those systems. In other words, evolution pushes intelligences to become pure replicators in the long run.

Hence we should ask: how can altruism increase the chances of reproduction? One possibility is for the environment to reward entities that are altruistic. Unfortunately, in the long run environments that reward altruistic entities may produce less efficient entities than environments that don’t. If there are two very similar environments, one that rewards altruism and one that doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they are able to take over and destroy whatever mechanism implements the reward for altruism in the former. Thus, we suggest finding environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to exist for the time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.
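The takeover dynamic described above can be sketched with toy replicator dynamics. Here altruists pay a fixed efficiency cost for caring about valence, and `altruism_bonus` stands in for an environment that rewards altruism; the 5% cost figure and both parameters are hypothetical assumptions, not estimates:

```python
def evolve(frac_altruist, generations, altruism_cost=0.05, altruism_bonus=0.0):
    """Two competing lineages; fitness is relative reproductive rate.

    Returns the altruists' population fraction after `generations`.
    """
    fitness_altruist = 1.0 - altruism_cost + altruism_bonus
    fitness_replicator = 1.0  # pure replicators pay no cost for valence
    for _ in range(generations):
        a = frac_altruist * fitness_altruist
        r = (1.0 - frac_altruist) * fitness_replicator
        frac_altruist = a / (a + r)
    return frac_altruist

# With no environmental reward, altruists are driven toward extinction:
no_reward = evolve(0.5, generations=1000)
# With a reward baked into the environment that outweighs the cost,
# altruism is evolutionarily stable instead:
with_reward = evolve(0.5, generations=1000, altruism_bonus=0.1)
```

The sketch also shows why a fragile reward mechanism is not enough: the result flips entirely on whether `altruism_bonus` exceeds `altruism_cost`, which is why the text argues the reward must be baked into the environment's very nature rather than implemented by a destroyable mechanism.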

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing such valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in defining the rough outline of kinds of theories of consciousness. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How one would formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned with pain or pleasure, and there is cause to believe that behaviorism itself might be anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as input and returns the algorithms it implements as output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are in fact hedonically charged, and for what reason, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind becomes more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state is. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than treating this as a reductio ad absurdum of functionalism, many of its proponents have instead concluded that consciousness itself is not a natural kind. This represents an important challenge for defining the valence of the system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.

We also should note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purposes of this article are close enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, in both relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i), we will need this frame-invariant description of what the system is in a given state. Thus, we need a function that takes physical measurements of the system as input and returns the best possible approximation of what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. In practice, we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well approximated by high-level features of it.
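To make the contrast between these accounts concrete, here is a schematic sketch of where each one "reads off" v(s|i). The intermediate functions A(s) and Ph(s) are the hard, unsolved steps; the toy rules below (pain as a negative-reward update, valence as a symmetry score of the physical state, the latter loosely gesturing at the conjecture in Principia Qualia) are purely illustrative stand-ins, not real valence computations:

```python
def A(system):
    """Functionalism's intermediate step: which algorithms does s run?

    In reality, inferring this is ill-posed (and arguably not
    frame-invariant); here it is just a dictionary lookup.
    """
    return system["algorithms"]

def Ph(system):
    """Non-materialist physicalism's step: a frame-invariant
    description of the system's physical state (also just a lookup)."""
    return system["physical_state"]

def v_functionalist(system):
    # Toy rule: negative-reward updates count as phenomenal pain.
    return -1.0 if "negative_reward_update" in A(system) else 0.0

def v_physicalist(system):
    # Toy rule: valence as how far the state's symmetry exceeds 0.5.
    return Ph(system)["symmetry"] - 0.5

toy_system = {
    "algorithms": {"negative_reward_update", "gradient_descent"},
    "physical_state": {"symmetry": 0.9},
}
# The two accounts can disagree about the very same system:
# v_functionalist says it is in pain, v_physicalist says its
# valence is positive.
```

The structural point is that the three theory families share the same outer shape, a function from system descriptions to valence, and differ only in which intermediate representation that function consumes.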

The main problem with Non-Materialist Physicalism arises when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of such stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would otherwise be more energetically expensive. In turn, over many millions of years, brains got really good at using consciousness to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal aspects of our existence happen to be computationally advantageous. Just as using quantum computing for factoring primes, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, so might using bound conscious experiences have helped sentient animals outcompete non-sentient ones.

Of course, there is as yet no evidence of macroscopic quantum coherence in the brain, and the brain is probably too hot for it anyway, so on its face Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that makes both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters required by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations, delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). So, at one extreme, it may be that a fully optimized brain simply does not need consciousness to solve problems. At the other extreme, it may turn out that consciousness is extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out the way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Hence the case for pushing our evolutionary history in the right direction now, before we give up.

 Coming Next: The Hierarchy of Cooperators

Now that we have covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality on its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, a repertoire of available states of consciousness, and a set of beliefs and practices to help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms producing the output are probably highly complex and use information processing with human-like linguistic representations, which means that the Chinese Room is very likely understanding what it outputs.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless other worthwhile references, but I think these are among the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big-picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

24 Predictions for the Year 3000 by David Pearce

In response to the Quora question Looking 1000 years into the future and assuming the human race is doing well, what will society be like?, David Pearce wrote:


The history of futurology to date makes sobering reading. Prophecies tend to reveal more about the emotional and intellectual limitations of the author than the future. […]
But here goes…

Year 3000

1) Superhuman bliss.

Mastery of our reward circuitry promises a future of superhuman bliss – gradients of genetically engineered well-being orders of magnitude richer than today’s “peak experiences”.
Superhappiness?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3274778/

2) Eternal youth.

More strictly, indefinitely extended youth and effectively unlimited lifespans. Transhumans, humans and their nonhuman animal companions don’t grow old and perish. Automated off-world backups allow restoration and “respawning” in case of catastrophic accidents. “Aging” exists only in the medical archives.
SENS Research Foundation – Wikipedia

3) Full-spectrum superintelligences.

A flourishing ecology of sentient nonbiological quantum computers, hyperintelligent digital zombies and full-spectrum transhuman “cyborgs” has radiated across the Solar System. Neurochipping makes superintelligence all-pervasive. The universe seems inherently friendly: ubiquitous AI underpins the illusion that reality conspires to help us.
Superintelligence: Paths, Dangers, Strategies – Wikipedia
Artificial Intelligence @ MIRI
Kurzweil Accelerating Intelligence
Supersentience

4) Immersive VR.

“Magic” rules. “Augmented reality” of earlier centuries has been largely superseded by hyperreal virtual worlds with laws, dimensions, avatars and narrative structures wildly different from ancestral consensus reality. Selection pressure in the basement makes complete escape into virtual paradises infeasible. For the most part, infrastructure maintenance in basement reality has been delegated to zombie AI.
Augmented reality – Wikipedia
Virtual reality – Wikipedia

5) Transhuman psychedelia / novel state spaces of consciousness.

Analogues of cognition, volition and emotion as conceived by humans have been selectively retained, though with a richer phenomenology than our thin logico-linguistic thought. Other fundamental categories of mind have been discovered via genetic tinkering and pharmacological experiment. Such novel faculties are intelligently harnessed in the transhuman CNS. However, the ordinary waking consciousness of Darwinian life has been replaced by state-spaces of mind physiologically inconceivable to Homo sapiens. Gene-editing tools have opened up modes of consciousness that make the weirdest human DMT trip akin to watching paint dry. These disparate states-spaces of consciousness do share one property: they are generically blissful. “Bad trips” as undergone by human psychonauts are physically impossible because in the year 3000 the molecular signature of experience below “hedonic zero” is missing.
ShulginResearch.org
Qualia Computing

6) Supersentience / ultra-high intensity experience.

The intensity of everyday experience surpasses today’s human imagination. Size doesn’t matter to digital data-processing, but bigger brains with reprogrammed, net-enabled neurons and richer synaptic connectivity can exceed the maximum sentience of small, simple, solipsistic mind-brains shackled by the constraints of the human birth-canal. The theoretical upper limits to phenomenally bound mega-minds, and the ultimate intensity of experience, remain unclear. Intuitively, humans have a dimmer-switch model of consciousness – with e.g. ants and worms subsisting with minimal consciousness and humans at the pinnacle of the Great Chain of Being. Yet Darwinian humans may resemble sleepwalkers compared to our fourth-millennium successors. Today we say we’re “awake”, but mankind doesn’t understand what “posthuman intensity of experience” really means.
What earthly animal comes closest to human levels of sentience?

7) Reversible mind-melding.

Early in the twenty-first century, perhaps the only people who know what it’s like even partially to share a mind are the conjoined Hogan sisters. Tatiana and Krista Hogan share a thalamic bridge. Even mirror-touch synaesthetes can’t literally experience the pains and pleasures of other sentient beings. But in the year 3000, cross-species mind-melding technologies – for instance, sophisticated analogues of reversible thalamic bridges – and digital analogs of telepathy have led to a revolution in both ethics and decision-theoretic rationality.
Could Conjoined Twins Share a Mind?
Mirror-touch synesthesia – Wikipedia
Ecstasy : Utopian Pharmacology

8) The Anti-Speciesist Revolution / worldwide veganism/invitrotarianism.

Factory-farms, slaughterhouses and other Darwinian crimes against sentience have passed into the dustbin of history. Omnipresent AI cares for the vulnerable via “high-tech Jainism”. The Anti-Speciesist Revolution has made arbitrary prejudice against other sentient beings on grounds of species membership as perversely unthinkable as discrimination on grounds of ethnic group. Sentience is valued more than sapience, the prerogative of classical digital zombies (“robots”).
What is High-tech Jainism?
The Antispeciesist Revolution
‘Speciesism: Why It Is Wrong and the Implications of Rejecting It’

9) Programmable biospheres.

Sentient beings help rather than harm each other. The successors of today’s primitive CRISPR genome-editing and synthetic gene drive technologies have reworked the global ecosystem. Darwinian life was nasty, brutish and short. Extreme violence and useless suffering were endemic. In the year 3000, fertility regulation via cross-species immunocontraception has replaced predation, starvation and disease to regulate ecologically sustainable population sizes in utopian “wildlife parks”. The free-living descendants of “charismatic mega-fauna” graze happily with neo-dinosaurs, self-replicating nanobots, and newly minted exotica in surreal garden of edens. Every cubic metre of the biosphere is accessible to benign supervision – “nanny AI” for humble minds who haven’t been neurochipped for superintelligence. Other idyllic biospheres in the Solar System have been programmed from scratch.
CRISPR – Wikipedia
Genetically designing a happy biosphere
Our Biotech Future

10) The formalism of the TOE is known.
(details omitted; does Quora support LaTeX?)

Dirac recognised the superposition principle as the fundamental principle of quantum mechanics. Wavefunction monists believe the superposition principle holds the key to reality itself. However – barring the epoch-making discovery of a cosmic Rosetta stone – the implications of some of the more interesting solutions of the master equation for subjective experience are still unknown.
Theory of everything – Wikipedia
M-theory – Wikipedia
Why does the universe exist? Why is there something rather than nothing?
Amazon.com: The Wave Function: Essays on the Metaphysics of Quantum Mechanics (9780199790548): Alyssa Ney, David Z Albert: Books

11) The Hard Problem of consciousness is solved.

The Hard Problem of consciousness was long reckoned insoluble. The Standard Model in physics from which (almost) all else springs was a bit of a mess but stunningly empirically successful at sub-Planckian energy regimes. How could physicalism and the ontological unity of science be reconciled with the existence, classically impossible binding, causal-functional efficacy and diverse palette of phenomenal experience? Mankind’s best theory of the world was inconsistent with one’s own existence, a significant shortcoming. However, all classical- and quantum-mind conjectures with predictive power had been empirically falsified by 3000 – with one exception.
Physicalism – Wikipedia
Quantum Darwinism – Wikipedia
Consciousness (Stanford Encyclopedia of Philosophy)
Hard problem of consciousness – Wikipedia
Integrated information theory – Wikipedia
Principia Qualia
Dualism – Wikipedia
New mysterianism – Wikipedia
Quantum mind – Wikipedia

[Which theory is most promising? As with the TOE, you’ll forgive me for skipping the details. In any case, my ideas are probably too idiosyncratic to be of wider interest, but for anyone curious: What is the Quantum Mind?]

12) The Meaning of Life resolved.

Everyday life is charged with a profound sense of meaning and significance. Everyone feels valuable and valued. Contrast the way twenty-first century depressives typically found life empty, absurd or meaningless; and how even “healthy” normals were sometimes racked by existential angst. Or conversely, compare how people with bipolar disorder experienced megalomania and messianic delusions when uncontrollably manic. Hyperthymic civilization in the year 3000 records no such pathologies of mind or deficits in meaning. Genetically preprogrammed gradients of invincible bliss ensure that all sentient beings find life self-intimatingly valuable. Transhumans love themselves, love life, and love each other.
https://www.transhumanism.com/

13) Beautiful new emotions.

Nasty human emotions have been retired – with or without the recruitment of functional analogs to play their former computational role. Novel emotions have been biologically synthesised and their “raw feels” encephalised and integrated into the CNS. All emotion is beautiful. The pleasure axis has replaced the pleasure-pain axis as the engine of civilised life.
An information-theoretic perspective on life in Heaven

14) Effectively unlimited material abundance / molecular nanotechnology.

Status goods long persisted in basement reality, as did relics of the cash nexus on the blockchain. Yet in a world where neither computational resources nor the substrates of pure bliss are rationed, such ugly evolutionary hangovers first withered, then died.
http://metamodern.com/about-the-author/
Blockchain – Wikipedia

15) Posthuman aesthetics / superhuman beauty.

The molecular signatures of aesthetic experience have been identified, purified and overexpressed. Life is saturated with superhuman beauty. What passed for “Great Art” in the Darwinian era is no more impressive than year 2000 humans might judge, say, a child’s painting by numbers or Paleolithic daubings and early caveporn. Nonetheless, critical discernment is retained. Transhumans are blissful but not “blissed out” – or not all of them at any rate.
Art – Wikipedia
http://www.sciencemag.org/news/2009/05/earliest-pornography

16) Gender transformation.

Like gills or a tail, “gender” in the human sense is a thing of the past. We might call some transhuman minds hyper-masculine (the “ultrahigh AQ” hyper-systematisers), others hyperfeminine (“ultralow AQ” hyper-empathisers), but transhuman cognitive styles transcend such crude dichotomies, and can be shifted almost at will via embedded AI. Many transhumans are asexual, others pan-sexual, a few hypersexual, others just sexually inquisitive. “The degree and kind of a man’s sexuality reach up into the ultimate pinnacle of his spirit”, said Nietzsche – which leads to (17).

Object Sexuality – Wikipedia
Empathizing & Systematizing Theory – Wikipedia
https://www.livescience.com/2094-homosexuality-turned-fruit-flies.html
https://www.wired.com/2001/12/aqtest/

17) Physical superhealth.

In 3000, everyone feels physically and psychologically “better than well”. Darwinian pathologies of the flesh such as fatigue, the “leaden paralysis” of chronic depressives, and bodily malaise of any kind are inconceivable. The (comparatively) benign “low pain” alleles of the SCN9A gene that replaced their nastier ancestral cousins have been superseded by AI-based nociception with optional manual overrides. Multi-sensory bodily “superpowers” are the norm. Everyone loves their body-images in virtual and basement reality alike. Morphological freedom is effectively unbounded. Awesome robolovers, nights of superhuman sensual passion, 48-hour whole-body orgasms, and sexual practices that might raise eyebrows among prudish Darwinians have multiplied. Yet life isn’t a perpetual orgy. Academic subcultures pursue analogues of Mill’s “higher pleasures”. Paradise engineering has become a rigorous discipline. That said, a lot of transhumans are hedonists who essentially want to have superhuman fun. And why not?
https://www.wired.com/2017/04/the-cure-for-pain/
http://io9.gizmodo.com/5946914/should-we-eliminate-the-human-ability-to-feel-pain
http://www.bbc.com/future/story/20140321-orgasms-at-the-push-of-a-button

18) World government.

Routine policy decisions in basement reality have been offloaded to ultra-intelligent zombie AI. The quasi-psychopathic relationships of Darwinian life – not least the zero-sum primate status-games of the African savannah – are ancient history. Some conflict-resolution procedures previously off-loaded to AI have been superseded by diplomatic “mind-melds”. In the words of Henry Wadsworth Longfellow, “If we could read the secret history of our enemies, we should find in each man’s life sorrow and suffering enough to disarm all hostility.” Our descendants have windows into each other’s souls, so to speak.

19) Historical amnesia.

The world’s last experience below “hedonic zero” marked a major transition in the evolutionary development of life. In 3000, the nature of sub-zero states below Sidgwick’s “natural watershed” isn’t understood except by analogy: some kind of phase transition in consciousness below life’s lowest hedonic floor – a hedonic floor that is being genetically ratcheted upwards as life becomes ever more wonderful. Transhumans are hyper-empathetic. They get off on each other’s joys. Yet paradoxically, transhuman mental superhealth depends on biological immunity to true comprehension of the nasty stuff elsewhere in the universal wavefunction that even mature superintelligence is impotent to change. Maybe the nature of e.g. Darwinian life, and the minds of malaise-ridden primitives in inaccessible Everett branches, seems no more interesting to them than we find books on the Dark Ages. Negative utilitarianism, if it were conceivable, might be viewed as a depressive psychosis. “Life is suffering”, said Gautama Buddha, but fourth millennials feel in the roots of their being that Life is bliss.
Invincible ignorance? Perhaps.
Negative Utilitarianism – Wikipedia

20) Super-spirituality.

A tough one to predict. But neuroscience can soon identify the molecular signatures of spiritual experience, refine them, and massively amplify their molecular substrates. Perhaps some fourth millennials enjoy lifelong spiritual ecstasies beyond the mystical epiphanies of temporal-lobe epileptics. Secular rationalists don’t know what we’re missing.
https://www.newscientist.com/article/mg22129531-000-ecstatic-epilepsy-how-seizures-can-be-bliss/

21) The Reproductive Revolution.

Reproduction is uncommon in a post-aging society. Most transhumans originate as extra-uterine “designer babies”. The reckless genetic experimentation of sexual reproduction had long seemed irresponsible. Old habits still died hard. By year 3000, the genetic crapshoot of Darwinian life has finally been replaced by precision-engineered sentience. Early critics of “eugenics” and a “Brave New World” have discovered by experience that a “triple S” civilisation of superhappiness, superlongevity and superintelligence isn’t as bad as they supposed.
https://www.reproductive-revolution.com/
https://www.huxley.net/

22) Globish (“English Plus”).

Automated real-time translation has been superseded by a common tongue – Globish – spoken, written or “telepathically” communicated. Partial translation manuals for mutually alien state-spaces of consciousness exist, but – as twentieth century Kuhnians would have put it – such state-spaces tend to be incommensurable and their concepts state-specific. Compare how poorly lucid dreamers can communicate with “awake” humans. Many Darwinian terms and concepts are effectively obsolete. In their place, active transhumanist vocabularies of millions of words are common. “Basic Globish” is used for communication with humble minds, i.e. human and nonhuman animals who haven’t been fully uplifted.
Incommensurability – SEoP
Uplift (science_fiction) – Wikipedia

23) Plans for Galactic colonization.

Terraforming and 3D-bioprinting of post-Darwinian life in nearby solar systems are proceeding apace. Vacant ecological niches tend to get filled. In earlier centuries, a synthesis of cryonics, crude reward pathway enhancements and immersive VR software, combined with revolutionary breakthroughs in rocket propulsion, led to the launch of primitive manned starships. Several are still starbound. Some transhuman utilitarian ethicists and policy-makers favour creating a utilitronium shockwave beyond the pale of civilisation to convert matter and energy into pure pleasure. Year 3000 bioconservatives focus on promoting life animated by gradients of superintelligent bliss. Yet no one objects to pure “hedonium” replacing unprogrammed matter.
Interstellar Travel – Wikipedia
Utilitarianism – Wikipedia

24) The momentous “unknown unknown”.

If you read a text and the author’s last words are “and then I woke up”, everything you’ve read must be interpreted in a new light – semantic holism with a vengeance. By the year 3000, some earth-shattering revelation may have changed everything: some fundamental background assumption of earlier centuries, perhaps one never explicitly represented in our conceptual scheme, may have been overturned. If it exists, then I’ve no inkling what this “unknown unknown” might be, unless it lies hidden in the untapped subjective properties of matter and energy. Christian readers might interject “The Second Coming”. Learning that the Simulation Hypothesis was true would be a secular example of such a revelation. Some believers in an AI “Intelligence Explosion” speak delphically of “The Singularity”. Whatever – Shakespeare made the point more poetically: “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy”.

As it stands, yes, (24) is almost vacuous. Yet compare how the philosophers of classical antiquity who came closest to recognising their predicament weren’t intellectual titans like Plato or Aristotle, but instead the radical sceptics. The sceptics guessed they were ignorant in ways that transcended the capacity of their conceptual scheme to articulate. By the lights of the fourth millennium, what I’m writing, and what you’re reading, may be stultified by something that humans don’t know and can’t express.
Ancient Skepticism – SEoP

**********************************************************************

OK, twenty-four predictions! Successful prophets tend to locate salvation or doom within the credible lifetime of their intended audience. The questioner asks about life in the year 3000 rather than, say, a Kurzweilian 2045. In my view, everyone reading this text will grow old and die before the predictions of this answer are realised or confounded – with one possible complication.

Opt-out cryonics and opt-in cryothanasia are feasible long before the conquest of aging. Visiting grandpa in the cryonics facility can turn death into an event in life. I’m not convinced that posthuman superintelligence will reckon that Darwinian malware should be revived in any shape or form. Yet if you want to wake up one morning in posthuman paradise – and I do see the appeal – then options exist:
http://www.alcor.org/

********************************************************************
p.s. I’m curious about the credence (if any) the reader would assign to the scenarios listed here.

Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit: they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside“. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about – suffering – allows a particularly clear critique of the problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and that it’s impossible to objectively tell if a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering – whether monkeys can suffer, or whether dogs can – it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; if any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of a disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence; in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but to a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness – enumerating its coherent constituent pieces – has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became – and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger as they’ve looked into techniques for how to “de-reify” consciousness while preserving some flavor of moral value, not smaller. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as trying to treat a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’, then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system where you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
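To make the interpretation-relativity concrete, here is a toy sketch (my own illustration, not Brian’s or an established result): the same trace of “physical” microstates can be mapped, after the fact, onto any target computation of equal length, simply by choosing a suitable interpretation function. All names below are hypothetical.

```python
import random

# "Physical system": an arbitrary trace of distinct microstates (shaken popcorn).
physical_trace = random.sample(range(10**6), 5)

def interpret(trace, program_states):
    """Ad-hoc interpretation map: declare that the i-th microstate
    'means' the i-th state of some target program."""
    return dict(zip(trace, program_states))

counter_run = [0, 1, 2, 3, 4]  # interpretation A: the popcorn "counts"
fib_run = [1, 1, 2, 3, 5]      # interpretation B: the popcorn "computes Fibonacci"

map_a = interpret(physical_trace, counter_run)
map_b = interpret(physical_trace, fib_run)

# Under either mapping, the identical physical trace "implements" a
# different computation; nothing in the physics privileges one reading.
assert [map_a[s] for s in physical_trace] == counter_run
assert [map_b[s] for s in physical_trace] == fib_run
```

The trick generalizes: for any finite state sequence, an interpretation map can be constructed post hoc, which is exactly why the popcorn has no fact of the matter about what it computed.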

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (2004) provides a more formal argument to this effect – that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible – in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.
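McCabe’s one-to-many correspondence can be illustrated with a minimal thresholding model (my toy example, not his; the 2.5 V threshold is an arbitrary illustrative assumption):

```python
# Toy readout: a logical bit is recovered from an analog memory-cell
# voltage by thresholding (2.5 V threshold assumed for illustration).
def logical_state(voltage: float) -> int:
    return 1 if voltage >= 2.5 else 0

# Many distinct electronic states realize the same logical state, so the
# map electronic-state -> logical-state is many-to-one: there is no
# bijection between numbers and exact electronic states of memory.
assert {logical_state(v) for v in (2.6, 3.3, 4.99)} == {1}
assert {logical_state(v) for v in (0.1, 1.2, 2.49)} == {0}
```

Because infinitely many voltages collapse onto each bit, recovering “the” number a memory cell stores already requires a choice of readout convention.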

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point- that “There’s no objective meaning to ‘the computation that this physical system is implementing’”– then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.

 

Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

 

 

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

 

 

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

 

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

 

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

 

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

 

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
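The arithmetic in the Vaidman-bomb setup can be checked numerically. Below is a minimal sketch of my own (not Aaronson's code): the no-bomb case is modeled as a coherent rotation of ε amplitude per repetition, the bomb case as an incoherent ε² measurement probability per repetition.

```python
import math

eps = 0.01
n = int(1 / eps)  # number of repetitions, 1/eps

# No bomb: the amplitude rotates coherently by theta = arcsin(eps) each
# repetition, so after n steps the "query the bomb" amplitude is sin(n*theta),
# giving order-1 probability of ending in the query state.
theta = math.asin(eps)
p_query_no_bomb = math.sin(n * theta) ** 2

# Bomb present: each repetition decoheres the superposition, putting eps^2
# probability on the "query" (explosion) branch, so the total explosion
# probability after n repetitions is 1 - (1 - eps^2)^n, roughly n*eps^2 = eps.
p_explode = 1 - (1 - eps**2) ** n

print(p_query_no_bomb)  # order 1
print(p_explode)        # order eps
```

Shrinking ε drives the explosion probability toward zero while the no-bomb query probability stays of order 1, which is exactly the asymmetry the thought experiment exploits.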

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be ‘uncomputed’ or reversed (as in Aaronson’s U⁻¹ scenario) would suggest s-risks aren’t as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, a red herring which it not only doesn’t define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism- nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’, or even to pick between interpretations (e.g. my popcorn example). If this isn’t the case— why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”– i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many– but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).

 


In conclusion- I think FRI has a critically important goal- reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.

 

Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:
Flavors of Computation are Flavors of Consciousness:
https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/
Is There a Hard Problem of Consciousness?
http://reducing-suffering.org/hard-problem-consciousness/
Consciousness Is a Process, Not a Moment
http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/
How to Interpret a Physical System as a Mind
http://reducing-suffering.org/interpret-physical-system-mind/
Dissolving Confusion about Consciousness
http://reducing-suffering.org/dissolving-confusion-about-consciousness/
Debate between Brian & Mike on consciousness:
https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D
Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:
My meta-framework for consciousness, including the Symmetry Theory of Valence:
http://opentheory.net/PrincipiaQualia.pdf
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:
http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:
http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/
My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/

Principia Qualia: Part II – Valence

Extract from Principia Qualia (2016) by my colleague Michael E. Johnson (from Qualia Research Institute). This is intended to summarize the core ideas of chapter 2, which proposes a precise, testable, simple, and so far science-compatible theory of the fundamental nature of valence (also called hedonic tone or the pleasure-pain axis; what makes experiences feel good or bad).

 

VII. Three principles for a mathematical derivation of valence

We’ve covered a lot of ground with the above literature reviews, and by synthesizing a new framework for understanding consciousness research. But we haven’t yet fulfilled the promise about valence made in Section II- to offer a rigorous, crisp, and relatively simple hypothesis about valence. This is the goal of Part II.

Drawing from the framework in Section VI, I offer three principles to frame this problem: ​

1. Qualia Formalism: for any given conscious experience, there exists- in principle- a mathematical object isomorphic to its phenomenology. This is a formal way of saying that consciousness is in principle quantifiable- much as electromagnetism, or the square root of nine is quantifiable. I.e. IIT’s goal, to generate such a mathematical object, is a valid one.

2. Qualia Structuralism: this mathematical object has a rich set of formal structures. Based on the regularities & invariances in phenomenology, it seems safe to say that qualia has a non-trivial amount of structure. It likely exhibits connectedness (i.e., it’s a unified whole, not the union of multiple disjoint sets), and compactness, and so we can speak of qualia as having a topology.

More speculatively, based on the following:

(a) IIT’s output format is data in a vector space,

(b) Modern physics models reality as a wave function within Hilbert Space, which has substantial structure,

(c) Components of phenomenology such as color behave as vectors (Feynman 1965), and

(d) Spatial awareness is explicitly geometric,

…I propose that Qualia space also likely satisfies the requirements of being a metric space, and we can speak of qualia as having a geometry.
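For reference, the standard definition being invoked here: a metric space is a set $Q$ equipped with a distance function $d$ satisfying, for all $x, y, z \in Q$:

```latex
\begin{align*}
d(x,y) &\ge 0, \qquad d(x,y) = 0 \iff x = y \\
d(x,y) &= d(y,x) && \text{(symmetry)} \\
d(x,z) &\le d(x,y) + d(y,z) && \text{(triangle inequality)}
\end{align*}
```

In other words, the claim is that there is a well-defined notion of *distance* between any two points in Qualia space, not merely a notion of nearness.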

Mathematical structures are important, since the more formal structures a mathematical object has, the more elegantly we can speak about patterns within it, and the closer our words can get to “carving reality at the joints”. ​

3. Valence Realism: valence is a crisp phenomenon of conscious states upon which we can apply a measure.

→ I.e. some experiences do feel holistically better than others, and (in principle) we can associate a value to this. Furthermore, to combine (2) and (3), this pleasantness could be encoded into the mathematical object isomorphic to the experience in an efficient way (we should look for a concise equation, not an infinitely-large lookup table for valence). […]

[Figure: valence structuralism]

I believe my three principles are all necessary for a satisfying solution to valence (and the first two are necessary for any satisfying solution to consciousness):

Considering the inverses:

If Qualia Formalism is false, then consciousness is not quantifiable, and there exists no formal knowledge about consciousness to discover. But if the history of science is any guide, we don’t live in a universe where phenomena are intrinsically unquantifiable- rather, we just haven’t been able to crisply quantify consciousness yet.

If Qualia Structuralism is false and Qualia space has no meaningful structure to discover and generalize from, then most sorts of knowledge about qualia (such as which experiences feel better than others) will likely be forever beyond our empirical grasp. I.e., if Qualia space lacks structure, there will exist no elegant heuristics or principles for interpreting what a mathematical object isomorphic to a conscious experience means. But this doesn’t seem to match the story from affective neuroscience, nor from our everyday experience: we have plenty of evidence for patterns, regularities, and invariances in phenomenological experiences. Moreover, our informal, intuitive models for predicting our future qualia are generally very good. This implies our brains have figured out some simple rules-of-thumb for how qualia is structured, and so qualia does have substantial mathematical structure, even if our formal models lag behind.

If Valence Realism is false, then we really can’t say very much about ethics, normativity, or valence with any confidence, ever. But this seems to violate the revealed preferences of the vast majority of people: we sure behave as if some experiences are objectively superior to others, at arbitrarily-fine levels of distinction. It may be very difficult to put an objective valence on a given experience, but in practice we don’t behave as if this valence doesn’t exist.

[…]

VIII. Distinctions in qualia: charting the explanation space for valence

Sections II-III made the claim that we need a bottom-up quantitative theory like IIT in order to successfully reverse-engineer valence, Section VI suggested some core problems & issues theories like IIT will need to address, and Section VII proposed three principles for interpreting IIT-style output:

  1. We should think of qualia as having a mathematical representation,
  2. This mathematical representation has a topology and probably a geometry, and perhaps more structure, and
  3. Valence is real; some things do feel better than others, and we should try to explain why in terms of qualia’s mathematical representation.

But what does this get us? Specifically, how does assuming these three things get us any closer to solving valence if we don’t have an actual, validated dataset (“data structure isomorphic to the phenomenology”) from *any* system, much less a real brain?

It actually helps a surprising amount, since an isomorphism between a structured (e.g., topological, geometric) space and qualia implies that any clean or useful distinction we can make in one realm automatically applies in the other realm as well. And if we can explore what kinds of distinctions in qualia we can make, we can start to chart the explanation space for valence (what ‘kind’ of answer it will be).

I propose the following four distinctions which depend on only a very small amount of mathematical structure inherent in qualia space, which should apply equally to qualia and to qualia’s mathematical representation:

  1. Global vs local
  2. Simple vs complex
  3. Atomic vs composite
  4. Intuitively important vs intuitively trivial

[…]

Takeaways: this section has suggested that we can get surprising mileage out of the hypothesis that there will exist a geometric data structure isomorphic to the phenomenology of a system, since if we can make a distinction in one domain (math or qualia), it will carry over into the other domain ‘for free’. Given this, I put forth the hypothesis that valence may plausibly be a simple, global, atomic, and intuitively important property of both qualia and its mathematical representation.

IX. Summary of heuristics for reverse-engineering the pattern for valence

Reverse-engineering the precise mathematical property that corresponds to valence may seem like finding a needle in a haystack, but I propose that it may be easier than it appears. Broadly speaking, I see six heuristics for zeroing in on valence:

A. Structural distinctions in Qualia space (Section VIII);

B. Empirical hints from affective neuroscience (Section I);

C. A priori hints from phenomenology;

D. Empirical hints from neurocomputational syntax;

E. The Non-adaptedness Principle;

F. Common patterns across physical formalisms (lessons from physics).

None of these heuristics determine the answer, but in aggregate they dramatically reduce the search space.

IX.A: Structural distinctions in Qualia space (Section VIII):

In the previous section, we noted that the following distinctions about qualia can be made: Global vs local; Simple vs complex; Atomic vs composite; Intuitively important vs intuitively trivial. Valence plausibly corresponds to a global, simple, atomic, and intuitively important mathematical property.

[…]

Music is surprisingly pleasurable; auditory dissonance is surprisingly unpleasant. Clearly, music has many adaptive signaling & social bonding aspects (Storr 1992; McDermott and Hauser 2005)- yet if we subtract everything that could be considered signaling or social bonding (e.g., lyrics, performative aspects, social bonding & enjoyment), we’re still left with something very emotionally powerful. However, this pleasantness can vanish abruptly- and even reverse– if dissonance is added.

Much more could be said here, but a few of the more interesting data points are:

  1. Pleasurable music tends to involve elegant structure when represented geometrically (Tymoczko 2006);
  2. Non-human animals don’t seem to find human music pleasant (with some exceptions), but with knowledge of what pitch range and tempo their auditory systems are optimized to pay attention to, we’ve been able to adapt human music to get animals to prefer it over silence (Snowdon and Teie 2010).
  3. Results suggest that consonance is a primary factor in which sounds are pleasant vs unpleasant in 2- and 4-month-old infants (Trainor, Tsang, and Cheung 2002).
  4. Hearing two of our favorite songs at once doesn’t feel better than just one; instead, it feels significantly worse.
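The consonance/dissonance claims above can be made quantitative. One standard psychoacoustic model is the Plomp–Levelt roughness curve; the sketch below uses Sethares’s parameterization of it (the constants are his, and this is an illustrative model from the psychoacoustics literature, not something from the data points above):

```python
import math

def roughness(f1: float, f2: float, a1: float = 1.0, a2: float = 1.0) -> float:
    """Sethares's parameterization of the Plomp-Levelt dissonance curve
    for two pure tones with frequencies f1, f2 and amplitudes a1, a2."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19)  # scales the curve with register
    x = s * (f_hi - f_lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

fifth = roughness(440.0, 660.0)          # perfect fifth: smooth
minor_second = roughness(440.0, 466.16)  # minor second: rough
```

On this model the minor second is roughly two orders of magnitude rougher than the perfect fifth, matching the intuition that near-coinciding (but not identical) frequencies are what sound unpleasant- which is also why superimposing two favorite songs produces many such near-collisions and feels worse than either alone.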

More generally, it feels like music is a particularly interesting case study by which to pick apart the information-theoretic aspects of valence, and it seems plausible that evolution may have piggybacked on some fundamental law of qualia to produce the human preference for music. This should be most obscured with genres of music which focus on lyrics, social proof & social cohesion (e.g., pop music), and performative aspects, and clearest with genres of music which avoid these things (e.g., certain genres of classical music).

[…]

X. A simple hypothesis about valence

To recap, the general heuristic from Section VIII was that valence may plausibly correspond to a simple, atomic, global, and intuitively important geometric property of a data structure isomorphic to phenomenology. The specific heuristics from Section IX surveyed hints from a priori phenomenology, hints from what we know of the brain’s computational syntax, introduced the Non-adaptedness Principle, and noted the unreasonable effectiveness of beautiful mathematics in physics to suggest that the specific geometric property corresponding to pleasure should be something that involves some sort of mathematically-interesting patterning, regularity, efficiency, elegance, and/or harmony.

We don’t have enough information to formally deduce which mathematical property these constraints indicate, yet in aggregate these constraints hugely reduce the search space, and also substantially point toward the following:

Given a mathematical object isomorphic to the qualia of a system, the mathematical property which corresponds to how pleasant it is to be that system is that object’s symmetry.
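As a toy illustration of what “measuring an object’s symmetry” could mean (my own sketch- the STV concerns the full mathematical object isomorphic to an experience, not a four-node graph), one can score a graph by the fraction of vertex permutations that are automorphisms:

```python
import math
from itertools import permutations

def symmetry_score(edges, n):
    """Fraction of vertex permutations preserving adjacency (automorphism
    density) for an undirected graph on vertices 0..n-1."""
    autos = sum(
        1
        for perm in permutations(range(n))
        if {frozenset(perm[v] for v in e) for e in edges} == edges
    )
    return autos / math.factorial(n)

# A 4-cycle is highly symmetric: 8 automorphisms out of 24 permutations...
cycle = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
# ...while a 4-path has only identity and reversal: 2 automorphisms.
path = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
```

Here the cycle scores 8/24 and the path 2/24; an actual test of the STV would apply an analogous (but far richer) symmetry measure to an IIT-style representation of an experience.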

[…]

XI. Testing this hypothesis today

In a perfect world, we could plug many peoples’ real-world IIT-style datasets into a symmetry detection algorithm and see if this “Symmetry in the Topology of Phenomenology” (SiToP) theory of valence successfully predicted their self-reported valences.

Unfortunately, we’re a long way from having the theory and data to do that.

But if we make two fairly modest assumptions, I think we should be able to perform some reasonable, simple, and elegant tests on this hypothesis now. The two assumptions are:

  1. We can probably assume that symmetry/pleasure is a more-or-less fractal property: i.e., it’ll be evident on basically all locations and scales of our data structure, and so it should be obvious even with imperfect measurements. Likewise, symmetry in one part of the brain will imply symmetry elsewhere, so we may only need to measure it in a small section that need not be directly contributing to consciousness.
  2. We can probably assume that symmetry in connectome-level brain networks/activity will roughly imply symmetry in the mathematical-object-isomorphic-to-phenomenology (the symmetry that ‘matters’ for valence), and vice-versa. I.e., we need not worry too much about the exact ‘flavor’ of symmetry we’re measuring.

So- given these assumptions, I see three ways to test our hypothesis:

1. More pleasurable brain states should be more compressible (all else being equal).

Symmetry implies compressibility, and so if we can measure the compressibility of a brain state in some sort of broad-stroke fashion while controlling for degree of consciousness, this should be a fairly good proxy for how pleasant that brain state is.
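To make “compressibility as a broad-stroke proxy” concrete, here is a minimal sketch (mine, not part of any proposed protocol) that uses zlib on synthetic signals as a stand-in for real neural data: a patterned, harmonic-rich signal compresses far better than amplitude-matched noise.

```python
import zlib
import numpy as np

def compressibility(samples: np.ndarray) -> float:
    """Compressed size / raw size; lower means more compressible."""
    raw = samples.astype(np.int16).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
n = np.arange(4096)

# Highly patterned signal: a few harmonics, exactly periodic over the window.
# (Synthetic stand-in for a "symmetric" brain state.)
patterned = 1000 * sum(np.sin(2 * np.pi * k * n / 4096) for k in (32, 64, 128))

# Unpatterned control: white noise spanning a similar amplitude range.
noise = rng.uniform(-3000, 3000, size=n.size)

assert compressibility(patterned) < compressibility(noise)
```

Any off-the-shelf compressor works for the demonstration; the open empirical question is whether the same ordering shows up in real recordings once degree of consciousness is controlled for.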

[…]

2. Highly consonant/harmonious/symmetric patterns injected directly into the brain should feel dramatically better than similar but dissonant patterns.

Consonance in audio signals generally produces positive valence; dissonance (e.g., nails-on-a-chalkboard) reliably produces negative valence. This obviously follows from our hypothesis, but it’s also obviously true, so we can’t use it as a novel prediction. But if we take the general idea and apply it to unusual ways of ‘injecting’ a signal into the brain, we should be able to make predictions that are (1) novel, and (2) practically useful.

TMS is generally used to disrupt brain functions by oscillating a strong magnetic field over a specific region to make those neurons fire chaotically. But if we used it on a lower-powered, rhythmic setting to ‘inject’ a symmetric/consonant pattern directly into parts of the brain involved with consciousness, the result should feel good, or at least produce much better valence than a similar dissonant pattern.

Our specific prediction: direct, low-power, rhythmic stimulation (via TMS) of the thalamus at harmonic frequencies (e.g., @1hz+2hz+4hz+6hz+8hz+12hz+16hz+24hz+36hz+48hz+72hz+96hz+148hz) should feel significantly more pleasant than similar stimulation at dissonant frequencies (e.g., @1.01hz+2.01hz+3.98hz+6.02hz+7.99hz+12.03hz+16.01hz+24.02hz+35.97hz+48.05hz+72.04hz+95.94hz+147.93hz).
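As a quick sanity check on what separates the two sets, a toy script (illustrative only) can score each set by how far its pairwise frequency ratios deviate, in cents, from the nearest small-integer ratio. The final ~148 Hz entries are omitted here, since they do not form small-integer ratios with most of the series.

```python
import math
from fractions import Fraction

def ratio_error_cents(f1: float, f2: float, max_den: int = 12) -> float:
    """Deviation (in cents) of f2/f1 from the nearest simple rational."""
    ratio = f2 / f1
    best = Fraction(ratio).limit_denominator(max_den)
    return abs(1200 * math.log2(ratio / float(best)))

harmonic = [1, 2, 4, 6, 8, 12, 16, 24, 36, 48, 72, 96]
detuned = [1.01, 2.01, 3.98, 6.02, 7.99, 12.03, 16.01, 24.02,
           35.97, 48.05, 72.04, 95.94]

def mean_error(freqs):
    pairs = [(a, b) for i, a in enumerate(freqs) for b in freqs[i + 1:]]
    return sum(ratio_error_cents(a, b) for a, b in pairs) / len(pairs)

assert mean_error(harmonic) < 1e-9   # every ratio is an exact small rational
assert mean_error(detuned) > mean_error(harmonic)
```

This only quantifies the detuning of the stimulation patterns themselves; whether the perceived valence difference tracks it is exactly what the proposed experiment would test.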

[…]

3. More consonant vagus nerve stimulation (VNS) should feel better than dissonant VNS.

The above harmonics-based TMS method would be a ‘pure’ test of the ‘Symmetry in the Topology of Phenomenology’ (SiToP) hypothesis. It may rely on developing custom hardware and is also well outside of my research budget.

However, a promising alternative method to test this is with consumer-grade vagus nerve stimulation (VNS) technology. Nervana Systems has an in-ear device which stimulates the Vagus nerve with rhythmic electrical pulses as it winds its way past the left ear canal. The stimulation is synchronized with either user-supplied music or ambient sound. This synchronization is done, according to the company, in order to mask any discomfort associated with the electrical stimulation. The company says their system works by “electronically signal[ing] the Vagus nerve which in turn stimulates the release of neurotransmitters in the brain that enhance mood.”

This explanation isn’t very satisfying, since it merely punts the question of why these neurotransmitters enhance mood, but their approach seems to work, and based on the symmetry/harmony hypothesis we can say at least something about why: effectively, they’ve somewhat accidentally built a synchronized bimodal approach (a coordinated combination of music+VNS) for inducing harmony/symmetry in the brain. This is certainly not the only component of how this VNS system functions, since the parasympathetic nervous system is both complex and powerful by itself, but it could be an important component.

Based on our assumptions about what valence is, we can make a hierarchy of predictions:

  1. Harmonious music + synchronized VNS should feel the best;
  2. Harmonious music + placebo VNS (unsynchronized, simple pattern of stimulation) should feel less pleasant than (1);
  3. Harmonious music + non-synchronized VNS (stimulation that is synchronized to a different kind of music) should feel less pleasant than (1);
  4. Harmonious music + dissonant VNS (stimulation with a pattern which scores low on consonance measures such as Chon 2008) should feel worse than (2) and (3);
  5. Dissonant auditory noise + non-synchronized, dissonant VNS should feel pretty awful.

We can also predict that if a bimodal approach for inducing harmony/symmetry in the brain is better than a single modality, a trimodal or quadrimodal approach may be even more effective. E.g., we should consider testing the addition of synchronized rhythmic tactile stimulation and symmetry-centric music visualizations. A key question here is whether adding stimulation modalities would lead to diminishing or synergistic/accelerating returns.

Raising the Table Stakes for Successful Theories of Consciousness

What should we expect out of a theory of consciousness?

For a scientific theory of consciousness to have even the slightest chance at being correct it must be able to address, at the very least, the following four questions*:

  1. Why consciousness exists at all (i.e. “the hard problem“; why we are not p-zombies)
  2. How it is possible to experience multiple pieces of information at once in a unitary moment of experience (i.e. the phenomenal binding problem; the boundary problem)
  3. How consciousness exerts the causal power necessary to be recruited by natural selection and allow us to discuss its existence (i.e. the problem of causal impotence vs. causal overdetermination)
  4. How and why consciousness has its countless textures (e.g. phenomenal color, smell, emotions, etc.) and the interdependencies of their different values (i.e. the palette problem)

In addition, the theory must be able to generate experimentally testable predictions. In Popper’s sense, the theory must make “risky” predictions. In a Bayesian sense, the theory must generate predictions that are much more likely if the theory is correct than if it is not, so that the posterior probabilities of the competing hypotheses differ substantially from their priors once the experiment is actually carried out.
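To make the Bayesian sense of a “risky” prediction concrete, a minimal worked example (with made-up numbers) shows how a risky prediction moves the posterior much more than a safe one:

```python
def posterior(prior: float, p_data_given_h: float, p_data_given_not_h: float) -> float:
    """Bayes' rule for a binary hypothesis."""
    num = prior * p_data_given_h
    return num / (num + (1 - prior) * p_data_given_not_h)

# Risky prediction: the data is likely under the theory, unlikely otherwise.
risky = posterior(0.10, 0.90, 0.05)   # prior 0.10 jumps to roughly 0.67

# Safe prediction: the data is likely either way, so the prior barely moves.
safe = posterior(0.10, 0.90, 0.85)    # prior 0.10 barely shifts

assert abs(risky - 0.10) > abs(safe - 0.10)
```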

As discussed in a previous article, most contemporary philosophies of mind are unable to address one or more of these four problems (or simply fail to make any interesting predictions). David Pearce’s non-materialist physicalist idealism (not the schizophrenic word-salad it may seem at first) is one of the few theories that promises to meet these criteria and make empirical predictions. This theory addresses the above questions in the following way:

(1) Why does consciousness exist?

Consciousness exists because reality is made of qualia. In particular, one might say that physics is implicitly the science that studies the flux of qualia. This would imply that in fact all that exists is a set of experiences whose interrelationships are encoded in the Universal Wavefunction of Quantum Field Theory. Thus we are collapsing two questions (“why does consciousness arise in our universe?” and “why does the universe exist?”) into a single question (“why does anything exist?”). Moreover, the question “why does anything exist?” may ultimately be solved with Zero Ontology. In other words, all that exists is implied by the universe having literally no information whatsoever. All (apparent) information is local; universally we live in an information-less quantum Library of Babel.

(2) Why and how is consciousness unitary?

Due to the expansion of the universe, the universal wavefunction has topological bifurcations that effectively create locally connected networks of quantum entanglement that are unconnected to the rest of reality. These networks meet the criteria of being ontologically unitary while having the potential to hold multiple pieces of information at once. In other words, Pearce’s theory of consciousness postulates that the world is made of a large number of experiences, though the vast majority of them are incredibly tiny and short-lived. The overwhelming bulk of reality is made of decohered micro-experiences which are responsible for most of the phenomena we see in the macroscopic world, ranging from solidity to Newton’s laws of motion.

A few of these networks of entanglement are us: you, right now, as a unitary “thin subject” of experience, according to this theory, are one of these networks (cf. Mereological Nihilism). Counter-intuitively, while a mountain is in some sense much bigger than yourself, at a deeper level you are bigger than the biggest object you will find in a mountain. Taking seriously the phenomenal binding problem we have to conclude that a mountain is for the most part just made of fields of decohered qualia, and thus, unlike a given biologically-generated experience, it is not “a unitary subject of experience”. In order to grasp this point it is necessary to contemplate a very extreme generalization of Empty Individualism: not only is it that every moment of a person’s experience is a different subject of experience, but the principle applies to every single network of entanglement in the entire multiverse. Only a tiny minority of these have anything to do with minds representing worlds. And even those that participate in the creation of a unitary experience exist within an ecosystem that gives rise to an evolutionary process in which quintillions of slightly different entanglement networks compete in order to survive in the extreme environments provided by nervous systems. Your particular experience is an entanglement network that evolved in order to survive in the specific brain state that is present right now. In other words, macroscopic experiences are the result of harnessing the computational power of Quantum Darwinism by applying it to a very particular configuration of the CNS. Brain states themselves encode Constraint Satisfaction Problems with the networks of electric gradients across firing neurons in sub-millisecond scales instantiating constraints whose solutions are found with sub-femtosecond quantum darwinism.

(3) How can consciousness be causally efficacious?

Consciousness exerts its causal power by virtue of being the only thing that exists. If anything is causal at all, it must, in the final analysis, be consciousness. No matter one’s ultimate theory of causality, assuming that physics describes the flux of qualia, then what instantiates such causality has to be this very flux.

Even under Eternalism/block view of the universe/Post-Everettian QM you can still meaningfully reconstruct causality in terms of the empirical rules for statistical independence across certain dimensions of fundamental reality. The dimensions that have time-like patterns of statistical independence will subjectively be perceived as being the arrows of time in the multiverse (cf. Timeless Causality).

Now an important caveat with this view of the relationship between qualia and causality is that it seems as if at least a weak version of epiphenomenalism must be true. The inverted spectrum thought experiment (ironically usually used in favor of the existence of qualia) can be used to question the causal power of qualia. This brings us to the fourth point:

(4) How do we explain the countless textures of consciousness?

How and why does consciousness have its countless textures and what determines its interrelationships? Pearce anticipates that someday we will have a Rosetta Stone for translating patterns of entanglement in quantum fields to corresponding varieties of qualia (e.g. colors, smells, sounds, etc.). Now, admittedly it seems far fetched that the different quantum fields and their interplay will turn out to be the source of the different qualia varieties. But is there something that in principle precludes this ontological correspondence? Yes, there are tremendous philosophical challenges here, the most salient of which might be the “being/form boundary”. This is the puzzle concerning why states of being (i.e. networks of entangled qualia) would act a certain way by virtue of their phenomenal character in and of itself (assuming its phenomenal character is what gives them reality to begin with). Indeed, what could possibly attach at a fundamental level the behavior of a given being and its intrinsic subjective texture? A compromise between full-fledged epiphenomenalism and qualia-based causality is to postulate a universal principle concerning the preference for full-spectrum states over highly differentiated ones. Consider for example how negative and positive electric charge “seek to cancel each other out”. Likewise, the Quantum Chromodynamics of quarks inside protons and neutrons works under a similar but generalized principle: color charges seek to cancel/complement each other out and become “white” or “colorless”. This principle would suggest that the causal power of specific qualia values comes from the gradient ascent towards more full-spectrum-like states rather than from the specific qualia values on their own.  If this were to be true, one may legitimately wonder whether hedonium and full-spectrum states are perhaps one and the same thing (cf. Valence structuralism). 
In some way this account of the “being/form boundary” is similar to process philosophy, but unlike process philosophy, here we are also taking mereological nihilism and wavefunction monism seriously.

However far-fetched it may be to postulate intrinsic causal properties for qualia values, if the ontological unity of science is to survive, there might be no other option. As we’ve seen, simple “patterns of computation” or “information processing” cannot be the source of qualia, since nothing that isn’t a quantum coherent wavefunction actually has independent existence. Unitary minds cannot supervene on decohered quantum fields. Thus the various kinds of qualia have to be searched for in networks of quantum entanglement; within a physicalist paradigm there is nowhere else for them to be.

Alternative Theories

I am very open to the possibility that other theories of consciousness are able to address these four questions. I have yet to see any evidence of this, though. But, please, change my mind if you can! Does your theory of consciousness rise to the challenge?


* This particular set of criteria was proposed by David Pearce (cf. Qualia Computing in Tucson). I would agree with him that these are crucial questions; indeed they make up the bare minimum that such a theory must satisfy. That said, we can formulate more comprehensive sets of problems to solve. An alternative framework that takes this a little further can be found in Michael Johnson’s book Principia Qualia (Eight Problems for a New Science of Consciousness).

Qualia Computing Attending the 2017 Psychedelic Science Conference

From the 19th to the 24th of April I will be hanging out at Psychedelic Science 2017 (if you are interested in attending but have not bought the tickets: remember you can register until the 14th of February).

In case you enjoy Qualia Computing and you are planning on going, now you can meet the human who is mostly responsible for these articles. I am looking forward to meeting a lot of awesome researchers. If you see me and enjoy what I do, don’t be afraid to say hi.

Why Care About Psychedelics?

Although the study of psychedelics and their effects is not a terminal value here in Qualia Computing, they are instrumental in achieving the main goals. The core philosophy of Qualia Computing is to (1) map out the state-space of possible experiences, (2) identify the computational properties of consciousness, and (3) reverse-engineer valence so as to find the way to stay positive without going insane.

Psychedelic experiences happen to be very informative and useful in making progress towards these three goals. The quality and magnitude of the consciousness alteration that they induce lends itself to exploring these questions. First, the state-space of humanly accessible experiences is greatly amplified once you add psychedelics into the mix. Second, the nature of these experiences is anything but computationally dull (cf. alcohol and opioids). On the contrary, psychedelic experiences involve non-ordinary forms of qualia computing: the textures of consciousness interact in non-trivial ways, and it stands to reason that some combinations of these textures will be recruited in the future for more than aesthetic purposes. They will be used for computational purposes, too. And third, psychedelic states greatly amplify the range of valence (i.e. the maximum intensity of both pain and pleasure). They unlock the possibility of experiencing peak bliss as well as intense suffering. This strongly suggests that whatever underpins valence at the fundamental level, psychedelics are able to amplify it to a fantastic (and terrifying) extent. Thus, serious valence research will undoubtedly benefit from psychedelic science.

It is for this reason that psychedelics have been a major topic explored here since the beginning of this project. Here is a list of articles that directly deal with the subject:

List of Qualia Computing Psychedelic Articles

1) Psychophysics For Psychedelic Research: Textures

How do you make a psychophysical experiment that tells you something foundational about the information-processing properties of psychedelic perception? I proposed to use an experimental approach invented by Benjamin J. Balas based on the anatomically-inspired texture analysis and synthesis techniques developed by Eero Simoncelli. In brief, one seeks to determine which summary statistics are sufficient to create perceptual (textural) metamers. In turn, in the context of psychedelic research, this can help us determine which statistical properties are best discriminated while sober and which differences are amplified while on psychedelics.
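As a crude illustration of the summary-statistics idea (this toy descriptor is far simpler than the Simoncelli-style models referenced above), one can check that two samples of the same texture land close together in statistic-space while different textures land far apart:

```python
import numpy as np

def summary_stats(img: np.ndarray) -> np.ndarray:
    """Toy texture descriptor: moments of the image and of its gradients."""
    feats = []
    for channel in (img.astype(float), *np.gradient(img.astype(float))):
        x = channel.ravel()
        x = (x - x.mean()) / (x.std() + 1e-9)
        feats += [x.mean(), x.std(), (x**3).mean(), (x**4).mean()]
    return np.array(feats)

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))
stripes = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))

# Two samples of the same texture are near-metamers under this crude
# descriptor; a different texture is well separated.
d_same = np.linalg.norm(summary_stats(noise) - summary_stats(rng.normal(size=(64, 64))))
d_diff = np.linalg.norm(summary_stats(noise) - summary_stats(stripes))
assert d_same < d_diff
```

The psychophysical question is then which of these statistics the visual system actually matches, sober versus on psychedelics.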

2) State-Space of Drug Effects

I distributed a survey in which I asked people to rate drug experiences along 60 different dimensions. I then conducted factor analysis on these responses. This way I empirically derived six major latent traits that account for more than half of the variance across all drug experiences. Three of these factors are tightly related to valence, which suggests that hedonic recalibration might benefit from a multi-faceted approach.
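For readers curious about the mechanics, here is a self-contained sketch of the idea, with synthetic stand-in data (not the actual survey) and PCA via SVD as a rough stand-in for proper factor analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the survey: 500 respondents x 60 rating dimensions,
# generated from 6 hypothetical latent traits plus noise.
latents = rng.normal(size=(500, 6))
loadings = rng.normal(size=(6, 60))
ratings = latents @ loadings + rng.normal(scale=1.0, size=(500, 60))

# PCA via SVD: each singular value's squared share is its explained variance.
centered = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / (s**2).sum()

# With 6 true latent traits, the first 6 components dominate the variance,
# mirroring the "six factors explain over half the variance" finding.
assert explained[:6].sum() > 0.5
```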

3) How to Secretly Communicate with People on LSD

I suggest that control interruption (i.e. the failure of feedback inhibition during psychedelic states) can be employed to transmit information in a secure way to people who are in other states of consciousness. A possible application of this technology might be: You and your friends at Burning Man want to send a secret message to every psychedelic user on a particular camp in such a way that no infiltrated cop is able to decode it. To do so you could instantiate the techniques outlined in this article on a large LED display.

4) The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

This article discusses the phenomenology of DMT states from the point of view of differential geometry. In particular, an argument is provided in favor of the view that high grade psychedelia usually involves a sort of geometric hyperbolization of phenomenal space.

5) LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

We provide an empirical method to test the (extremely) wild hypothesis that it is possible to experience “multiple branches of the multiverse at once” on high doses of psychedelics. The point is not to promote a particular interpretation of such experiences. Rather, the point is that we can actually generate predictions from such interpretations and then go ahead and test them.

6) Algorithmic Reduction of Psychedelic States

People report a zoo of psychedelic effects. However, as in most things in life, there may be a relatively small number of basic effects that, when combined, can account for the wide variety of phenomena we actually observe. Algorithmic reductions are proposed as a conceptual framework for analyzing psychedelic experiences. Four candidate main effects are proposed.

7) Peaceful Qualia: The Manhattan Project of Consciousness

Imagine that there was a world-wide effort to identify the varieties of qualia that promote joy and prosocial behavior at the same time. Could these be used to guarantee world peace? By giving people free access to the most valuable and prosocial states of consciousness one may end up averting large-scale conflict in a sustainable way. This article explores how this investigation might be carried out and proposes organizational principles for such a large-scale research project.

8) Getting closer to digital LSD

Why are the Google Deep Dream pictures so trippy? This is not just a coincidence. People call them trippy for a reason.

9) Generalized Wada-Test

In a Wada-test a surgeon puts half of your brain to sleep and evaluates the cognitive skills of your awake half. Then the process is repeated in mirror image. Can we generalize this procedure? Imagine that instead of just putting a hemisphere to sleep we gave it psychedelics. What would it feel like to be tripping, but only with your right hemisphere? Even more generally: envision a scheme in which one alternates a large number of paired states of consciousness and study their mixtures empirically. This way it may be possible to construct a network of “opinions that states of consciousness have about each other”. Could this help us figure out whether there is a universal scale for subjective value (i.e. valence)?

10) Psychedelic Perception of Visual Textures

In this article I discuss some problems with verbal accounts of psychedelic visuals, and I invite readers to look at some textures (provided in the article) and try to describe them while high on LSD, 2C-B, DMT, etc. You can read some of the hilarious comments already left in there.

11) The Super-Shulgin Academy: A Singularity I Can Believe In

Hard to summarize.

 

The Binding Problem

[Our] subjective conscious experience exhibits a unitary and integrated nature that seems fundamentally at odds with the fragmented architecture identified neurophysiologically, an issue which has come to be known as the binding problem. For the objects of perception appear to us not as an assembly of independent features, as might be suggested by a feature based representation, but as an integrated whole, with every component feature appearing in experience in the proper spatial relation to every other feature. This binding occurs across the visual modalities of color, motion, form, and stereoscopic depth, and a similar integration also occurs across the perceptual modalities of vision, hearing, and touch. The question is what kind of neurophysiological explanation could possibly offer a satisfactory account of the phenomenon of binding in perception?
One solution is to propose explicit binding connections, i.e. neurons connected across visual or sensory modalities, whose state of activation encodes the fact that the areas that they connect are currently bound in subjective experience. However this solution merely compounds the problem, for it represents two distinct entities as bound together by adding a third distinct entity. It is a declarative solution, i.e. the binding between elements is supposedly achieved by attaching a label to them that declares that those elements are now bound, instead of actually binding them in some meaningful way.
Von der Malsburg proposes that perceptual binding between cortical neurons is signalled by way of synchronous spiking, the temporal correlation hypothesis (von der Malsburg & Schneider 1986). This concept has found considerable neurophysiological support (Eckhorn et al. 1988, Engel et al. 1990, 1991a, 1991b, Gray et al. 1989, 1990, 1992, Gray & Singer 1989, Stryker 1989). However although these findings are suggestive of some significant computational function in the brain, the temporal correlation hypothesis as proposed, is little different from the binding label solution, the only difference being that the label is defined by a new channel of communication, i.e. by way of synchrony. In information theoretic terms, this is no different than saying that connected neurons posses two separate channels of communication, one to transmit feature detection, and the other to transmit binding information. The fact that one of these channels uses a synchrony code instead of a rate code sheds no light on the essence of the binding problem. Furthermore, as Shadlen & Movshon (1999) observe, the temporal binding hypothesis is not a theory about how binding is computed, but only how binding is signaled, a solution that leaves the most difficult aspect of the problem unresolved.
I propose that the only meaningful solution to the binding problem must involve a real binding, as implied by the metaphorical name. A glue that is supposed to bind two objects together would be most unsatisfactory if it merely labeled the objects as bound. The significant function of glue is to ensure that a force applied to one of the bound objects will automatically act on the other one also, to ensure that the bound objects move together through the world even when one, or both of them are being acted on by forces. In the context of visual perception, this suggests that the perceptual information represented in cortical maps must be coupled to each other with bi-directional functional connections in such a way that perceptual relations detected in one map due to one visual modality will have an immediate effect on the other maps that encode other visual modalities. The one-directional axonal transmission inherent in the concept of the neuron doctrine appears inconsistent with the immediate bi-directional relation required for perceptual binding. Even the feedback pathways between cortical areas are problematic for this function due to the time delay inherent in the concept of spike train integration across the chemical synapse, which would seem to limit the reciprocal coupling between cortical areas to those within a small number of synaptic connections. The time delays across the chemical synapse would seem to preclude the kind of integration apparent in the binding of perception and consciousness across all sensory modalities, which suggests that the entire cortex is functionally coupled to act as a single integrated unit.
— Section 5 of “Harmonic Resonance Theory: An Alternative to the ‘Neuron Doctrine’ Paradigm of Neurocomputation to Address Gestalt properties of perception” by Steven Lehar

The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

[Content Warning: Trying to understand the contents of this essay may be mind-warping. Proceed with caution.]

Friends, right here and now, one quantum away, there is raging a universe of active intelligence that is transhuman, hyperdimensional, and extremely alien.

—Terence McKenna

The Geometry of DMT States

This is an essay on the phenomenology of DMT. The analysis here presented predominantly uses algorithmic, geometric and information-theoretic frameworks, which distinguishes it from purely phenomenological, symbolic, neuroscientific or spiritual accounts. We do not claim to know what ultimately implements the effects here described (i.e. in light of the substrate problem of consciousness), but the analysis does not need to go there in order to have explanatory power. We posit that one can account for a wide array of (apparently diverse) phenomena present in DMT-induced states of consciousness by describing the overall changes in the geometry of one’s spatiotemporal representations (what we will call “world-sheets”, i.e. 3D + time surfaces; 3D1T for short). The concrete hypothesis is that the network of subjective measurements of distances we experience on DMT (coming from the relationships between the phenomenal objects one experiences in that state) has an overall geometry that can accurately be described as hyperbolic (or hyperbolic-like). In other words, our inner 3D1T world grows larger than is possible to fit in an experiential field with 3D Euclidean phenomenal space (i.e. an experience of dimension R2.5 representing an R3 scene). This results in phenomenal spaces, surfaces, and objects acquiring a mean negative curvature. Of note is that even though DMT produces this effect in the most consistent and intense way, the effect is also present in states of consciousness induced by other tryptamines and to a lesser extent in those induced by all other psychedelics.

Conceptual Framework: Algorithmic Reduction

We will use the reduction framework originally proposed in the article Algorithmic Reductions of Psychedelic States. This means that we will be examining how algorithms and processes (as experienced by a subject of experience) can explain the dynamics of people’s phenomenology in DMT states. We do not claim “the substrate of consciousness” is becoming hyperbolic in any literal sense (though we do not discard that possibility). Rather, we interpret the hyperbolic curvature that experience acquires while on DMT as an emergent effect of a series of more general mechanisms of action that can work together to change the geometry of a mind. These same mechanisms of action govern the dynamics of other psychedelic experiences; it is the proportion and intensity of the various “basic” effects that leads to the different outcomes observed. In other words, the hyperbolization of phenomenal space may not be a fundamental effect of DMT, but rather, it may be an emergent effect of simpler effects combined (not unlike how our seemingly smooth macroscopic space-time emerges from the jittery yet fundamental interactions that happen in a microscopic high-dimensional quantum foam).

In particular, we will discuss three candidate models for a more fundamental algorithmic reduction: (1) the synergistic effect of control interruption and symmetry detection resulting in a change of the metric of phenomenal space (analogously to how one can measure the geometry of hyperbolic graph embeddings), (2) the mind as a dynamic system with energy sources, sinks and invariants, in which curvature stores potential energy, and (3) a change in the underlying curvature of the micro-structure of consciousness. These models are not mutually exclusive, and they may turn out to be compatible. More on this later.

What is Hyperbolic Geometry?

Perhaps the clearest way to describe hyperbolic space is to show examples of it:

The picture to the left shows a representation of a “saddle” surface. In geometry, saddle surfaces are 2-dimensional hyperbolic spaces (also called “hyperbolic planes” or H2). For a surface to have “constant curvature” it must look the same at every point. In other words, for a saddle to be a geometric saddle, every point in it must be a “saddle point” (i.e. a point with negative curvature). As you can see, saddles have the property that the angles of a triangle found in them add up to less than 180 degrees (compare that to surfaces with positive curvature such as the 2-sphere, in which the angles of a triangle add up to more than 180 degrees). Generalizing this to higher dimensions, the middle image above shows a cube in H3 (i.e. a hyperbolic space of three dimensions). This cube, since it is in hyperbolic space, has thin edges and pointy corners. More generally, the corners of polyhedra (and polytopes) will be more pointy in Hn than they are in Rn. This is why you can see in the right image a dodecahedron with right-angled corners, which in this case can tile H3 (cf. Not Knot). Such a thing, people of the past might say, is an insult to the imagination. Times are changing, though, and hyperbolic geometry is now an acceptable subject of conversation.

An important property of hyperbolic spaces is the way in which the area of a circle (or the n-dimensional volume of a hypersphere) increases as a function of its radius. In 2D Euclidean space the area grows quadratically with the radius. But on H2, the area grows exponentially as a function of the radius! As you may imagine, it is easy to get lost in hyperbolic space. A few steps take you to an entirely different scene. Moreover, your influence over the environment is greatly diminished as a function of distance. For example, the habitable region of solar systems in hyperbolic spaces (i.e. the Goldilocks zone) is extremely thin. In order to avoid getting burned or freezing to death you would have to place your planet within a very narrow distance range from the center star. Most of what you do in hyperbolic space either stays as local news or is quickly dissipated in an ever-expanding environment.
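The claim about exponential growth can be checked directly: in H2 with curvature −1, a disk of radius r has area 2π(cosh(r) − 1), which matches the Euclidean πr² for small r and dwarfs it for large r.

```python
import math

def euclidean_area(r: float) -> float:
    """Area of a Euclidean disk of radius r."""
    return math.pi * r**2

def hyperbolic_area(r: float) -> float:
    """Area of a disk of radius r in H2 with curvature -1: 2*pi*(cosh(r) - 1)."""
    return 2 * math.pi * (math.cosh(r) - 1)

# Locally, H2 looks flat: the two areas agree for small radii.
assert abs(hyperbolic_area(0.01) / euclidean_area(0.01) - 1) < 1e-3

# Globally, hyperbolic area explodes: at r = 20 it is over 10^5 times larger.
assert hyperbolic_area(20) > euclidean_area(20) * 1e5
```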

We Can Only Remember What We Can Reconstruct

We cannot experience H2 or H3 manifolds under normal circumstances, but we can at least represent some aspects of them through partial embeddings (i.e. instantiations as subsets of other spaces preserving properties) and projections into more familiar geometries. It is important to note that such representations will necessarily be flawed. As it turns out, it is notoriously hard to truly embed H2 in Euclidean 3D space, since doing so will necessarily distort some properties of the original H2 space (such as distance, angle, area, local curvature, etc.). As we will discuss further below, this difficulty turns out to be crucial for understanding why DMT experiences are so hard to remember. In order to remember the experience you need to create a faithful and memorable 3D Euclidean embedding of it. Thus, if one happens to experience a hyperbolic object and wants to remember as much of it as possible, one will have to think strategically about how to fold, crunch and deform such an object so that it can be fit into compact Euclidean representations.
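
The distortion any flat picture of H2 must introduce is easy to quantify in the Poincaré disk model, where the hyperbolic distance between points p and q inside the unit disk is arccosh(1 + 2|p−q|² / ((1−|p|²)(1−|q|²))). A short sketch (standard formula; the variable names are ours) showing that equal Euclidean separations correspond to wildly different hyperbolic distances depending on where they sit in the disk:

```python
import math

def poincare_distance(p, q):
    """Hyperbolic distance between two points in the Poincare disk
    model of H2 (curvature -1)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    d2 = dx * dx + dy * dy
    denom = (1 - (p[0]**2 + p[1]**2)) * (1 - (q[0]**2 + q[1]**2))
    return math.acosh(1 + 2 * d2 / denom)

# Two pairs of points, each 0.05 apart in Euclidean terms:
near_center = poincare_distance((0.00, 0), (0.05, 0))
near_edge   = poincare_distance((0.90, 0), (0.95, 0))
print(near_center, near_edge)  # the second is several times larger
```

This is why faithful flat pictures of hyperbolic tilings (like the disk models in Not Knot) crush infinitely much space into the rim: the model preserves angles but not distances.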

What about DMT suggests hyperbolic geometry?

Why should we believe that phenomenal space on DMT (and to a lesser extent on other psychedelics) becomes hyperbolic-like? We will argue that the features people use to describe their trips, as well as concrete mathematical observations of such features, point directly to hyperbolic geometry. Here is a list of such features (arranged from least to most suggestive… you know, for dramatic effect):

  1. Perception of far-out travel (as we said, small movements in hyperbolic space lead to huge changes in the scene).
  2. Feelings of becoming big (you can fit a lot more inside a circle of radius r in hyperbolic space).
  3. The space experienced is often depicted as “more real and more dense than normal”.
  4. The use of terms like “mind-expanding” and “warping” to describe the effects of the drug are very common.
  5. People describing it as “a different kind of space” and frequently using the word “hyperspace” to talk about it.
  6. Difficulty integrating/remembering the objects and scenes experienced (e.g. “they were too alien to recall”).
  7. Constant movement/acceleration and change of perspectives which are often described as “unfolding scenes and expanding patterns” (cf. the chrysanthemum, jitterbox).
  8. Continuous change of the scene’s context through escape routes: a door that leads to a labyrinth that leads to branching underground tunnels that lead to mirror rooms that lead to endless windows, and the one you take leads you to a temple with thirty-seven gates, which lead to a kale salad world, etc. (example).
  9. Crowding of the scene beyond the limits of Euclidean space (users frequently wondering: “How was I able to fit so much in my mind? I don’t see any space for my experience to fit in here!”)
  10. Reported similarity with fractals.
  11. Omnipresence of saddles making up the structural constraints of the hallucinated scenes. For example, one often hears about experiencing scenes saturated with: joints, twists, bifurcations, curved alleys, knots, and double helixes.
  12. Looking at self-similar objects (such as cauliflowers) can get you lost in what seems like endless space. (Note: beware of the potential side effects of looking at a cauliflower on DMT*).
  13. PSIS-like experiences where people seem to experience multiple alternative outcomes from each event at the same time (this may be the result of “hyperbolic branching” through time rather than space).
  14. Psychedelic replication pictures usually include features that can be interpreted as hyperbolic objects embedded in Euclidean 3D.
  15. People describe “incredibly advanced mechanisms” and “impossible objects” that cannot be represented in our usual reality (e.g. Terence McKenna’s self-dribbling basketballs).
  16. At least one mathematician has stated that what one experiences on DMT cannot be translated into Euclidean geometry (unlike what one experiences on LSD).
  17. We received a series of systematic DMT trip-reports by a math enthusiast and experienced psychonaut who claims that the surfaces experienced on DMT are typically composed of hyperbolic tilings (which imply a negative curvature; cf. wallpaper groups).

This article goes beyond claiming a mere connection between DMT and hyperbolic geometry. We will be more specific by addressing the aspects of the experience that can be interpreted geometrically. To do so, let us now turn to a phenomenological description of the way DMT experiences usually unfold:

The Phenomenology of DMT experiences: The 6 Levels

In order to proceed we will give an account of a typical vaporized DMT experience. You can think of the following six sections as stages or levels of a DMT journey. Let me explain. The highest level you get to depends on the dose consumed, and in high doses one experiences all of the levels, one at a time, and in quick succession (i.e. on high doses these levels are perceived as the stages of the experience). If one takes just enough DMT to cross over to the highest level one reaches during the journey for only a brief moment, then that level will probably be described as “the peak of the experience”. If, on the other hand, one takes a dose that squarely falls within the milligram range for producing a given level, it will be felt as more of a “plateau”. Each level is sufficiently distinct from the others that people will rarely miss the transitions between them.

The six levels of a DMT experience are: Threshold, Chrysanthemum, Magic Eye, Waiting Room, Breakthrough, and Amnesia. Let us dive in!

(Note: The following description assumes that the self-experimenter is in good physical and mental health at the time of consuming the DMT. It is well known that negative states of consciousness can lead to incomprehensible hellscapes when “boosted” by DMT (please avoid DMT at all costs while you are drunk, depressed, angry, suicidal, irritable, etc.). The full geometry is best appreciated in a mentally and emotionally balanced set and setting.)

(1) Threshold

The very first alert of something unusual happening may come between 3 and 30 seconds after inhaling the DMT, depending on the dose consumed. Rather than a clear sensorial or cognitive change, the very first hint is a change in the apparent ambiance of one’s setting. You know how at times when you enter a temple, an art museum, a crowd of people, or even just a well-decorated restaurant you can abstract an undefinable yet clearly present “vibe of the place”? There’s nothing overt or specific about it. The ambiance of a place is more of an overall gestalt than a localized feeling. An ambiance somehow encodes information about the social, ideological and aesthetic quality of the place or community you just crashed into, and it tells you at a glance which moods are socially acceptable and which ones are discouraged. The specific DMT vibe you feel on a given session can be one of a million different flavors. That said, whether you feel like you entered a circus or joined a religious ceremony, the very first hint of a DMT experience is nonetheless always (or almost always) accompanied by an overall feeling of significance. The feeling that something important is about to happen, or is happening, is made manifest by the vibe of the state. This vibe is usually present for at least the first 150 seconds or so of the journey. Interestingly, the change in ambiance is shorter-lived than the trip itself; it seems to go away before the visuals do, declining quickly once the peak is over.

Within seconds after the change in ambiance, one feels a sudden sharpening of all the senses. Some people describe this as “upgrading one’s experience to an HD version of it”. The level of detail in one’s experience is increased, yet the overall semantic content is still fairly intact. People say things like: “Reality around me seems more crisp” and “it’s like I’m really grasping my surroundings, you know? fully in tune with the smallest textures of the things around me.” Terence McKenna described this state as follows: “The air appears to suddenly have been sucked out of the room because all the colors brighten visibly, as though some intervening medium has been removed.”

On a schedule of repeated small doses (below 4 mg; preferably i.m.) one can stabilize this sharpening of the senses for arbitrarily long periods of time. I am a firm believer that this state (quite apart from the alien experiences on higher doses) can already be recruited for a variety of computational and aesthetic tasks that humans do in this day and age. In particular, the state itself seems to enable grasping complex ideas with many parameters without distorting them, which may be useful for learning mathematics at an accelerated pace. Likewise, the state increases one’s awareness of one’s surroundings (possibly at the expense of consuming many calories). I find it hard to imagine that artists would fail to find valuable uses for this state.

(2) The Chrysanthemum

If one ups the dose a little bit and lands somewhere in the range between 4 to 8 mg, one is likely to experience what Terence McKenna called “the Chrysanthemum”. This usually manifests as a surface saturated with a sort of textured fabric composed of intricate symmetrical relationships, bright colors, shifting edges and shimmering pulsing superposition patterns of harmonic linear waves of many different frequencies.

Depending on the dose consumed one may experience either one or several semi-parallel channels. Whereas a threshold dose usually presents you with a single strong vibe (or ambiance), the Chrysanthemum level often has several competing vibes each bidding for your attention. Here are some examples of what the visual component of this state of consciousness may look like.

The visual component of the Chrysanthemum is often described as “the best screen saver ever“, and if you happen to experience it in a good mood you will almost certainly agree with that description, as it is usually extremely harmonious, symmetric and beautiful in uncountable ways. No external input can possibly replicate the information density and intricate symmetry of this state; such a state has to be endogenously generated, as a sort of harmonic attractor of your brain dynamics.

You can find many replications of Chrysanthemum-level DMT experiences on the internet, and I encourage you to examine their implicit symmetries (this replication is one of my all-time favorites).

In Algorithmic Reduction of Psychedelic States we posited that any one of the 17 wallpaper symmetry groups can be instantiated as the symmetries that govern psychedelic visuals. Unfortunately, unlike the generally slow evolution of usual psychedelic visuals, DMT’s vibrational frequency forces such visuals to evolve at a speed that makes it difficult for most people to spot the implicit symmetry elements that give rise to the overall mathematical structure underneath one’s experience. For this reason it has been difficult to verify that all 17 wallpaper groups are possible in DMT states. Fortunately, we were recently able to confirm that this is in fact the case, thanks to someone who trained himself to do just this, i.e. to detect symmetry elements in patterns at an outstanding speed.

An anonymous psychonaut (whom we will call researcher A) sent a series of trip reports to Qualia Computing detailing the mathematical properties of psychedelic visuals under various substances and dose regimens. A is an experienced psychonaut and a math enthusiast who recently trained himself to recognize (and name) the mathematical properties of symmetrical patterns (such as in works of art or biological organisms). In particular, he has become fluent at naming the symmetries exhibited by psychedelic visuals. In the context of 2D visuals on surfaces, A confirms that the symmetrical textures that arise in psychedelic states can exhibit any one of the 17 wallpaper symmetry groups. Likewise, he has been able to confirm that every possible spherical symmetry group can also be instantiated in these states.

The images below show some examples of the visuals that A has experienced on 2C-B, LSD, 4-HO-MET and DMT (sources: top left, top middle, the rest were made with this service):

The Chrysanthemum level interacts with sensory input in an interesting way: the texture of anything one looks at quickly becomes saturated with nested 2-dimensional symmetry groups. If you took enough DMT to take you to this level and you keep your eyes open and look at a patterned surface (i.e. statistical texture), it will symmetrify beyond recognition. A explains that at this level DMT visuals share some qualities with those of, say, LSD, mescaline, and psilocin. Like other psychedelics, DMT’s Chrysanthemum level can instantiate any 2-dimensional symmetry, yet there are important differences from other psychedelics at this dose range. These include the consistent change in ambiance (already present in threshold doses), the complexity and consistency of the symmetrical relationships (much more dense and whole-experience-consistent than is usually possible with other psychedelics), and the speed (with a control-interruption frequency reaching up to 30 hertz, compared to 10-20 hertz for most psychedelics). Thus, people tend to point out that DMT visuals (at this level) are “faster, smaller, more detailed and more globally consistent” than on comparable levels of alteration from similar agents.

Now, if you take a dose that is a little higher (in the ballpark of 8 to 12 mg), the Chrysanthemum will start doing something new and interesting…

(3) The Magic Eye Level

A great way to understand the Magic Eye level of DMT effects is to think of the Chrysanthemum as the texture of an autostereogram (colloquially known as “Magic Eye” pictures). Our visual experience can be decomposed into two points-of-view (corresponding to the feed coming from each eye) that share information in order to solve the depth-map problem in vision, i.e. to map each visual quale to a space of relative distances so that (a) the input is explained and (b) recognizable everyday objects are represented as implicit shapes beneath the depth-map. You can think of this process as a sort of handshake between bottom-up perception and top-down modeling.
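
For readers unfamiliar with how autostereograms hide a depth-map in a repeating texture: each pixel copies the pixel one “period” to its left, and that period is modulated by the local depth, so when the eyes fuse adjacent repetitions the shifts are perceived as depth. A minimal random-dot sketch (illustrative code of ours, not from the original article; function and parameter names are hypothetical):

```python
import random

def autostereogram(depth, pattern_width=8, max_shift=3):
    """Minimal random-dot autostereogram: each output pixel repeats the
    pixel `pattern_width - shift` columns to its left, where `shift`
    grows with local depth. Fusing neighboring repetitions of the
    texture recovers the hidden depth-map."""
    h, w = len(depth), len(depth[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            shift = int(depth[y][x] * max_shift)   # nearer -> larger shift
            period = pattern_width - shift
            if x < period:
                out[y][x] = random.randint(0, 1)   # seed texture (the "Chrysanthemum")
            else:
                out[y][x] = out[y][x - period]     # link pixels across the period
    return out

# A flat background (depth 0) with a "near" rectangle (depth 1) in the middle:
depth = [[1 if 8 <= x < 24 and 2 <= y < 6 else 0 for x in range(32)] for y in range(8)]
img = autostereogram(depth)
```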

In everyday conditions one solves the depth-map problem within a second of opening one’s eyes (minus minor details that are added as one looks around). But on DMT, the “low-level perceptions” look like a breathing Chrysanthemum, which means that top-down modeling has that “constantly shifting” stuff to play with. What to make of it? Anything you can think of.

There are three major components of variance on the DMT Magic Eye level:

  1. Texture (dependent on the Chrysanthemum’s evolution)
  2. World-sheet (non-occluding 3D1T depth maps)
  3. Extremely lowered information copying threshold.

The image on the left is a lobster, the one in the center is a cone, and the one on the right contains furniture (a lamp, a chair and a table). Notice that what you see is a sort of depth-map which encodes shapes. We will call this depth-map, together with the appearance of movement and acceleration represented in it, a world-sheet.

World-Sheets

The world-sheet encodes the “semantic content” of the scene and is capable of representing arbitrary situations (including information about what you are seeing, where you are, what the entities there are doing, what is happening, etc.).

It is common to experience scenes from usually mundane-looking places like ice-cream stores, play pens, household situations, furniture rooms, apparel, etc. Likewise, one frequently sees entities in these places, but they rarely seem to mind you because their world is fairly self-contained; it is as if one were seeing them through a window. People often report that the worlds they saw on a DMT trip were all “made of the same thing”. This can be interpreted as the texture becoming the surfaces of the world-sheet, so that the surfaces of the tables, chairs, ice-cream cones, the bodies of the people, and so on are all patterned with the same texture (just as in actual autostereograms). This texture is indeed the Chrysanthemum, completely contorted to accommodate all the curvature of the scene.

Magic Eye level scenes often include 3D geometrical shapes like spheres, cones, cylinders, cubes, etc. The complexity of the scene is roughly dose-dependent. As one ups the dose (while still remaining within the Magic Eye level), complex translucid qualia crystals in three dimensions start to become a possibility.

Whatever phenomenal object you experience on this level that lives for more than a millisecond needs to have effective strategies for surviving in an ecosystem of other objects adapted to that level. Given the extremely lowered information copying threshold, whatever is good at making copies of itself will begin to tessellate, mutate and evolve, stealing as much of your attention as possible along the way. Cyclic transitions occupy one’s attention: objects quickly become scenes, which quickly become gestalts, from which a new texture evolves, in which new objects are detected, and so on ad infinitum.

A reports that at this dose range one can experience at least some of the 230 space groups as objects represented in the world-sheet. For example, A reports having stabilized a structure with a Pm-3m symmetry structure, not unlike the structure of ZIF-71-RHO. Visualizing such complex 3D symmetries, however, does seem to require previous training and high levels of mental concentration (i.e. in order to ensure that all the symmetry elements are indeed what they are supposed to be).

There is so much qualia lying around, though, that at times not even your normal space can contain it all. Any regular or semi-regular symmetrical structure you construct by focusing on it is prone to overflow if you focus on it too much. What does this mean? If you focus too much on, for example, the number 6, your mind might represent the various ways in which you can arrange six balls in a perfectly symmetrical way. Worlds made of hexagons and octahedrons interlocked in complex but symmetrical ways may begin to tessellate your experiential field. With every second you find more and more ways of representing the number six in interesting, satisfying, metaphorically-sound synesthetic ways (cf. Thinking in Numbers). Now, what happens if you try to represent the number seven symmetrically on the plane? The problem is that you will have too many heptagons to fit in Euclidean space (cf. Too Many Triangles). Thus the resulting symmetrical patterns will seem to overflow the plane (which is often felt as a folding and fluid re-arrangement; when there is no space left in a region, it either expands space or is felt as some sort of synesthetic tension or stress, like a sense of crackling under a lot of pressure).
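
The arithmetic behind the heptagon overflow is simple: a regular Euclidean p-gon has interior angle (p − 2)·180/p degrees, so three hexagons meeting at a vertex use up exactly 360°, while three heptagons exceed it. Hence the {7,3} tiling only exists in negatively curved space, where angles shrink enough to close up. A quick check (illustrative):

```python
def interior_angle_deg(p):
    # Interior angle of a regular Euclidean p-gon, in degrees.
    return (p - 2) * 180.0 / p

# Three hexagons around a vertex tile the flat plane exactly:
print(3 * interior_angle_deg(6))  # 360.0
# Three heptagons overflow it, forcing negative curvature:
print(3 * interior_angle_deg(7))  # ~385.7, i.e. more than 360
```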

In particular, A claims that in the lower ranges of the DMT Magic Eye level the texture of the Chrysanthemum tends to exhibit heptagonal and triheptagonal tilings (as shown in the picture above). A explains that at the critical point between the Chrysanthemum and the Magic Eye levels the rate of symmetry detection of the Chrysanthemum cannot be contained to a 2D surface. Thus, the surface begins to fold, often in semi-symmetric ways. Every time one “recognizes” an object on this “folding Chrysanthemum” the extra curvature is passed on to this object. As the dose increases, one interprets more and more of this extra curvature and ends up shaping a complex and highly dynamic spatiotemporal depth map with hyperbolic folds. In the upper ranges of the Magic Eye level the world-sheet is so curved that the scenes one visualizes are intricate and expansive, feeling at times like one is able to peer through one’s horizon in all directions and see oneself and one’s world from a distance. At some critical point one may feel like the space around one is folding into a huge dome where the walls are made of whatever texture + world-sheet combination happened to win the Darwinian selection pressures applied to the qualia patterns on the Magic Eye level. This concentrated hyperbolic synesthetic texture is what becomes the walls of the Waiting Room…

(4) Waiting Room

In the range of 12-25mg of DMT a likely final destination is the so-called Waiting Room. This experience is distinguished from the Magic Eye level in several ways: first, the world-sheet at this level breaks into several quasi-independent components, each evolving semi-autonomously. Second, one goes from “partial immersion” into “full immersion”. The transition between Magic Eye and Waiting Room often looks like “finding a very complex element in the scene and using it as a window into another dimension”. The total 2D surface curvature present (by adding up the curvature of all elements in the scene) is substantially higher than that of the Magic Eye level, and one can start to see actual 3D hyperbolic space. Perhaps a way of describing this transition is as follows: The curvature of the world-sheet gets to be so extreme that in order to accommodate it one’s entire multi-modal experiential field becomes involved, and a feeling of total and complete synchronization of all senses into a unified synesthetic experience is inescapable (often described as the “mmmMMMMMMM+++++!!!” whole-body tone people report). Thus the feeling of entering into an entirely new dimension. This explains what people mean when they say: “I experienced such an intense pressure that my soul could not be contained in my tiny body, and the intense pressure launched me into a bigger world”.

The images above, taken together, are meant as an impressionistic replication of what a Waiting Room experience may feel like. On the left you see the textured world-sheet curved in several ways resulting in an enclosed room with shimmering walls and an entity looking at a futuristic-looking contraption. The images on the right are meant to illustrate the ways in which the texture of the world-sheet evolves: you will find that the micro-structure of such texture is constantly unfolding in new symmetrical ways (bottom right), and propagating such changes throughout the entire surface at a striking speed (top right).

DMT Waiting Rooms contain entities that at times do interact directly with you. Their reality is perceived as a much more intense and intimate version of what human interaction normally is, but they do not give the impression of being telepathic. That said, their power is felt as if they could radiate it. One could say that this level of DMT places you in such an intimate, vulnerable and open state that interpreting the entities in a second-person social mode is almost inevitable. It is like interacting with someone you really know (or perhaps someone you really really want to know… or really really don’t want to know), except that the whole world is made of those feelings and some entities inhabit that world.

Serious hard-core psychonauts tend to describe the Waiting Room as a temporary stopgap. Indeed, more poetry could be written about Waiting Room states of consciousness than about most human activities, for their state-space is larger, more diverse and more hedonically loaded. But even so, it is important to realize that there are even weirder states. Serious psychonauts exploring the upper ranges of humanly-accessible high-energy consciousness research may see Waiting Rooms as stepping stones to the real deal…

(5) Breakthrough

If one manages to ingest around 20-30mg of DMT there is a decent chance that one will achieve a DMT breakthrough experience (some sources place the dosage as high as 40mg). There is no agreed-upon definition for a “DMT breakthrough”, but most experienced users confirm that there is a qualitative change in the structure and feel of one’s experience on such high doses. Based on A’s observations we postulate that DMT breakthroughs are the result of a world-sheet with a curvature so extreme that topological bifurcations start to happen uncontrollably. In other words, the very topology of one’s world-sheet is forced to change in order to accommodate all of the intense curvature.

The geometry of space you experience may suddenly go from a simply-connected space into something else. What does this mean? Suddenly one may feel like space itself is twisting and reconnecting to itself in complex (and often confusing) ways. One may find that given any two points on this “alien world” there may be loops between them. This has drastic effects on one’s every representation (including, of course, the self-other divide). The particular feeling that comes with this may explain the presence of PSIS-like experiences induced by DMT and high dose LSD (cf. LSD and Quantum Measurements). Since the topological bifurcations are happening on a 3D1T world-sheet, this may look like “multiple things happening at once” or “objects taking multiple non-overlapping paths at once in order to get from one place into another”. The entities at this level feel transpersonal: due to the extreme curvature it is hard to distinguish between the information you ascribe to your self-model and the information you ascribe to others. Thus one is all over the place, in a literal topological sense.

While in the Waiting Room one can stabilize the context where the experience seems to be taking place, in a DMT breakthrough state one invariably “moves across vast regions, galaxies, universes, realities, etc.” in a constant, uncontrollable way. Why is this? This may be related to whether one can contain the curvature of the objects one attends to. If the curvature is uncontrollable, it will “pass on to the walls” and result in constant “context switches”. In fact, such a large fraction of 3D space is perceived as hyperbolic in one way or another that one seems to have access to vast regions of reality at the same time. Thus a sense of radical openness is often experienced.

(6) Amnesia Level

Unlike 5-MeO-DMT, “normal DMT” experiences are not typically so mind-warping that they dissolve one’s self-model completely. On the contrary, many people report DMT as having “surprisingly little effect on one’s sense of self except at very high doses” relative to the overall intensity of the alteration. Thus, DMT usually does not produce amnesia due to ego death directly. Rather, the amnesic properties of DMT at high doses can be blamed on the difficulty of instantiating the necessary geometry to make sense of what was experienced. In the case of doses above “breakthrough experiences” there is a chance that the user will not be able to recall anything about the most intense periods of the journey. Unfortunately, we are not likely to learn much from these states (that is, until we live in a community of people who can access other phenomenal geometries in a controlled fashion).

Recalling the Immemorial

We postulate that the difficulty people have remembering the phenomenal quality of a DMT experience is in part the result of not being able to access the geometry required to accurately relive their hallucinations. The few and far between elements of the experience that people do somehow manage to remember, we posit, are those that happen to be (relatively) easy to embed in 3D Euclidean space. Thus, we predict that what people do manage to “bring back” from hyperspace will be biased towards those things that can be represented in R3.

This explains why people remember experiencing intensely saddled scenes (e.g. fractals, tunnels, kale worlds, recursive processes, and so on). Unfortunately, most of the information-rich and interesting (irreducible, prime) phenomenal objects one experiences on DMT are by their very nature impossible to embed in our normal experiential geometry. This problem reveals an intrinsic limitation that comes from living in a community of intelligences (i.e. contemporary humans) who are constrained in the range of state-spaces of consciousness that they can access. This realization calls for a new epistemological paradigm, one that incorporates state-specific representations into a globally accessible database of states of consciousness, together with the network that emerges from their mutual (un)intelligibility.

DMT Objects

The increased curvature of one’s world-sheet can manifest in endless ways. In some important ways, the state-space of possible scenes that you can experience on DMT is much bigger than what you can experience in normal states of consciousness. Strictly speaking, you can represent more scenes in DMT states than in most other states because the overall amount of qualia available is much larger. Of course, the very dynamics of these experiences constrain what can be experienced, so there are still many things inaccessible on DMT. For instance, it may be impossible to experience a perfectly uniform blue screen (since the Chrysanthemum texture is saturated with edges, surfaces and symmetrical patterns). Likewise, scenes that are too irregular may be impossible to stabilize given the omnipresent symmetry enhancement found in the state.

What is the nature of the objects and entities one experiences on DMT? Magic Eye level experiences tend to include objects that are usually found in our everyday life. It is at the DMT Waiting Room level and above that the “truly impossible objects” begin to emerge. In particular, all of these objects are often curved in extreme ways. They condense within them complex networks of interlocking structures sustaining an overall superlative curvature. Here are some example objects that one can experience on Waiting Room and Breakthrough level experiences:

Notice that all of these images have saddles everywhere. Ultimately, the range of objects one can experience in such states includes many other features that are impossible to represent in R3. The objects that people do manage to bring back and recall later on are precisely those that can be embedded in R3. Thus you often see extremely contorted wrapped-up objects. The most interesting ones (such as quasi-regular H3 tilings or irreducible objects) are next-to-impossible to bring back in any meaningful way, for now at least.

DMT Space Expansion

The expansion of space responsible for the increased curvature happens wherever you direct your attention (including at the objects you see). Here you can see what it may look like to stare at a DMT object; this is called the “jitterbox” mechanism.

DMT entities

DMT entities come in many forms, and their overall quality is extremely dose-dependent. Rather than describing any specific manifestation, we will instead briefly characterize the rough properties of the entities experienced based on the level reached.

  1. Threshold: Usually the ambiance change has a social feel to it, more similar to entering a room full of people from an alien culture than to entering an empty cave or a warm pool on your own. In this sense the very beginning of a DMT experience may already frame the experience in social terms and facilitate the expectation of meeting entities.
  2. Chrysanthemum: One can feel perhaps the subtle presence of entities, but they are often interpreted as “feeling connected” to one’s friends, relatives and acquaintances. The feeling does not manifest in any clear spatial way, though. Other than that, this state is apersonal in the sense that one does not see any entity directly.
  3. Magic Eye: Here the entities can be roughly described as having an impersonal relationship with you. They are just there, hanging out on their own, often engrossed with whatever activities your world-sheet is capable of representing for them.
  4. Waiting Room: At this level entities start becoming able to interact with you. They feel like autonomous beings wrapped in mystery. Their intentions, what they know, and their emotional states can be guessed from their behavior, but they are not immediately obvious.
  5. Breakthrough: At this level the entities one meets seem to have what we might call a transpersonal relationship with you. They share their own internal states (emotions, knowledge, wishes, etc) with you. It feels like they can communicate telepathically and “see through” you. One cannot hide one’s “private” mental contents from them at this level.
  6. Amnesia: One cannot remember, of course, exactly what happens here. But if trip reports are any indication, this level is reminiscent of highly “mystical” states in which one’s implicit beliefs about Personal Identity are obliterated and replaced by the feeling of becoming an all-encompassing entity. “Union with God” and “Samadhi” are terms that describe the subjective feeling of self in this state. In other words, at this level it is impossible to distinguish between oneself and other entities, for all is represented as one. (Beware: never try to go here if you feel bad at the time, since negative hedonic tone can be amplified just as much as a positive feeling such as Samadhi.)

Modeling the Hyperbolic Geometry of DMT

How can we explain the drastic geometric changes of phenomenal space on DMT? As mentioned earlier, we will discuss three (non-mutually exclusive) hypotheses. These hypotheses work at the level of an algorithmic reduction, which means that we will go deeper than merely describing information processing and phenomenology, while stopping short of the implementation level of abstraction. It is worth pointing out that describing the ways in which DMT experiences are hyperbolic is in itself an algorithmic reduction. What we are about to do is develop a more granular algorithmic reduction in which we try to explain why hyperbolic geometry emerges in DMT states by postulating underlying processes. Here are the three reductions:

(1) Control Interruption + Symmetry detection = Change in Metric

Recall that in a previous article we algorithmically reduced general psychedelic states. The building blocks of that reduction were:

  1. Control Interruption (which amounts to a “longer half-life for all qualia”)
  2. Drifting (“breathing walls, eyes moving from their normal place, waving sensations”)
  3. Enhanced Pattern Recognition (pareidolia, cf. Getting Closer to Digital LSD)
  4. Lowered Symmetry Detection Threshold (quasi-symmetric patterns tend to “lock into” perfectly symmetrical structures)

Using this framework, one can argue that DMT makes space more hyperbolic in the following way: in high doses, the synergistic effect of control interruption and extremely lowered symmetry detection thresholds, experienced in quick succession, makes the subjective distances between the points of the phenomenal objects in the scene evolve a hyperbolic metric. How would this happen? The key thing to realize is that in this model the quasi-Euclidean space we usually experience is an emergent effect of an equilibrium between these two forces. Even in normal circumstances our world-sheet is continuously regenerated; the rate at which symmetrical relationships in the scene are detected is balanced by the rate at which these subjective measurements are forgotten. This usually results in an emergent Euclidean geometry. On DMT, the rate of symmetry detection increases while the rate of “forgetting” (inhibiting control) decreases. Attention points out more relationships in quick succession, and this creates a network of measured subjective distances that cannot be embedded in Euclidean 3D space. Thus there is an overflow of symmetries. We are currently working on a precise mathematical model of this process in order to reconstruct a hyperbolic metric out of these two parameters. In this model, control interruption is interpreted as a change in the decay rate of subjective distance measurements in one’s mind, whereas the lowered symmetry detection threshold is interpreted as a change in the probability of measuring the distance between any two given points, as a function of the network of distances already measured.
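The “overflow of symmetries” claim can be made concrete with a back-of-the-envelope calculation (this is not the precise model under development; the branching factor and the unit-separation volume heuristic are illustrative assumptions). If each detected symmetry relation spawns a few new ones faster than old ones are forgotten, the relation network grows exponentially with graph distance, while a Euclidean 3-ball only offers polynomial room; past a modest radius the network can only embed isometrically in a negatively curved space.

```python
import math

def relations_within(radius: int, branching: int) -> int:
    """Nodes within graph distance `radius` of the origin in a tree where
    each detected symmetry relation spawns `branching` new relations."""
    return sum(branching ** r for r in range(radius + 1))

def euclidean_capacity(radius: float) -> float:
    """Volume heuristic: roughly how many unit-separated points fit
    inside a Euclidean 3-ball of this radius."""
    return (4.0 / 3.0) * math.pi * radius ** 3

# The relation network outgrows the Euclidean room at a modest radius:
for r in range(1, 8):
    n, cap = relations_within(r, branching=3), euclidean_capacity(r)
    print(r, n, round(cap), "overflow" if n > cap else "fits")
```

Exponential volume growth is exactly the signature of hyperbolic space, where the volume of a ball grows like e^(2r) rather than r^3.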

The curvature increase is most salient where there is already a lot of measurements made, since highly-measured regions focus attention and attention drives symmetry detection. Thus, focusing on any surface will make the surface itself hyperbolic (rather than the 3D space, since measurements are mostly concentrated on the surface). On the other hand, if the curvature is too high to keep on a 2D surface, it will “jump” to 3D or even 3D1T (i.e. branching out the temporal component of one’s experience). The result is that the total curvature of one’s 3D1T world-sheet increases on DMT in a dose-dependent way.

Different doses lead to different states of curvature homeostasis. Each part of the world-sheet has constantly-morphing shapes and sudden curvature changes, but the total curvature is nonetheless more or less preserved on a given dose. It is not easy to get rid of excess curvature. Rather, whenever one tries to reduce the curvature in one part of the scene one is simply pushing it elsewhere. Even when one manages to push most of the curvature out of a given modality (e.g. vision) it is likely to quickly return in another modality (e.g. kinesthetic or auditory landscape) since attention never ceases on a DMT trip. Such apparent dose-dependent global curving of the world-sheet (and its jump from one modality into another) constrains the shape of the objects one can represent in the state (thus leading to alien-looking highly-curved objects similar to the ones shown above).

(2) Dynamic System Account: Energy Sources, Sinks and Invariants

Energy Invariants

Let us define a notion of energy in consciousness so that we can formalize the way experience warps and transforms on DMT. Assume that one needs “energy” in order to instantiate a given experience (really, this is just an implicit invariant, and we could use a different name). Each feature of a given experience requires a certain amount of energy, which roughly corresponds to a weighted sum of the intensity and the information content of that feature. For instance, the brightness of a point of colored light in one’s visual field is energy-dependent. The same goes for the information content of a texture, the number of represented symmetrical relationships, the speed at which an object moves (plus its acceleration), and even the curvature of one’s geometry. All of these features require energy to be instantiated.
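A minimal sketch of this “energy as a weighted sum” idea (the weights, and the choice of Shannon entropy as the information term, are illustrative assumptions rather than anything specified in the text):

```python
import math
from collections import Counter

def shannon_entropy(samples) -> float:
    """Information content (bits per symbol) of a feature, e.g. the
    symbol distribution of a texture patch."""
    counts, total = Counter(samples), len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def feature_energy(intensity: float, samples, w_int: float = 1.0,
                   w_info: float = 1.0) -> float:
    """Toy energy of one experiential feature: a weighted sum of its
    intensity and its information content."""
    return w_int * intensity + w_info * shannon_entropy(samples)

flat = feature_energy(0.5, "aaaaaaaa")  # uniform patch: zero entropy
rich = feature_energy(0.5, "abcdabcd")  # detailed patch: 2 bits/symbol
```

On this toy account, a detailed texture costs more energy to instantiate than a uniform one at equal brightness, which is the intended reading of the paragraph above.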

Under normal circumstances the brain has many clever and (evolutionarily) appropriate ways of modulating the amount of energy present in different modules of one’s mind. That is, we have many programs that work as energy switches for different mental activities depending on the context. When we think, we allocate a certain amount of energy to finding a shape/thought-form that satisfies a number of constraints. When shape-shifting that energy in various ways fails to produce a solution, we either allocate more energy to the task or perhaps give up. On DMT, however, the energy cannot be switched off; it can only pass from one modality into another. In other words, whereas in normal circumstances one strategically sets energy budgets for different tasks, on DMT one simply has constantly high energy globally, no matter what.

More formally, this model of DMT action says that DMT modifies the structure of one’s mind so that (1) energy freely passes from one form into another, and (2) energy floods the entire system. Let’s talk about energy sources and sinks.

Energy Sources and Sinks

In this algorithmic reduction, DMT increases the amount of consciousness in one’s mind by impairing our normal energy sinks while increasing the throughput of our energy sources. This frequently manifests as phenomenal spaces becoming hyperbolic in the mathematical-geometric sense of increasing their negative curvature, since such curvature is one manifestation of higher levels of energy. Energy sinks are still present, and they struggle to capture as much of the energy as possible. In particular, one energy sink is “recognition” of objects on the world-sheet.

This model postulates that attention functions as an energy source, whereas pattern recognition functions as an energy sink.

The Hamiltonian of a World-sheet

The total energy of one’s consciousness increases on DMT, and there is a constant flow between the different forms this energy can take. That said, one can analyze the various components of one’s experience piecewise, especially if the network of energy exchange clusters well. In particular, we can postulate that world-sheets are fairly self-contained. Relative to other parts of the environment the mind is simulating, the world-sheet itself has a very high within-cluster energy exchange and a relatively low cross-cluster energy exchange. One’s world-sheet is very fluid, and little deformations propagate almost linearly throughout it. On a given dose plateau, if you add up the acceleration, the velocity, the curvature, and so on of every point in the world-sheet, you will come up with a number that remains fairly constant over time. Thus studying the Hamiltonian of a world-sheet (i.e. the state-space allowed by a constant level of energy) can be very informative in describing both the information content and the experiential intensity of DMT experiences.


You can deform a surface without changing its local curvature (source: Gauss’s “Remarkable Theorem” [seriously, the quotes are not mine]). Thus on a DMT trip plateau there is still a lot of room for transformations of the world-sheet into different shapes with similar curvature.
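For readers who want to check the claim, the textbook instance of this theorem (standard differential geometry, not specific to this essay) is the one-parameter isometric deformation of the helicoid into the catenoid:

```latex
% One-parameter family of isometric surfaces joining the helicoid
% (\theta = 0) to the catenoid (\theta = \pi/2):
\[
  \sigma_\theta(u,v) \;=\; \cos\theta
  \begin{pmatrix} \sinh v \sin u \\ -\sinh v \cos u \\ u \end{pmatrix}
  \;+\; \sin\theta
  \begin{pmatrix} \cosh v \cos u \\ \cosh v \sin u \\ v \end{pmatrix}
\]
% Every surface in the family shares the same first fundamental form,
\[
  ds^2 \;=\; \cosh^2 v \,\left(du^2 + dv^2\right),
\]
% so by the Theorema Egregium all of them have the same Gaussian curvature
\[
  K(u,v) \;=\; -\frac{1}{\cosh^4 v}.
\]
```

The surface visibly changes shape throughout the deformation, yet the curvature at corresponding points never does; this is the sense in which the world-sheet can keep morphing on a dose plateau without changing its curvature budget.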

Under normal circumstances the curvature of one’s world-sheet is, as far as I can tell, arousal-dependent. Have you noticed how, when you feel tired, you are more likely to defocus your visual experience? You are tired late at night and trying to watch a movie, but bringing the scene into focus is too much of an effort, so you defocus for a little while (still listening to the dialogue). Why did you do that? In the framework proposed here, you did it to diminish the energy it takes to sustain a curved world-sheet with a lot of information. Doing the focusing may be aesthetically pleasing and rewarding when fully awake or excited, but when tired the returns are not worth the effort, and the dialogue is more essential to the plot anyway.

It takes effort and wakefulness to focus on a complex scene with many intricate details. (Reading and trying to comprehend this essay may itself require significant conscious energy expenditure). For this reason we might say that DMT is an exceedingly effective arouser of consciousness.

Bayesian Energy Sinks

One essential property of our minds is that our level of mental arousal decreases when we interpret our experience as “expected”. People who enjoy their own minds do so, in part, by finding unexpected ways of understanding expected things. In the presence of new information that one cannot easily integrate, however, one’s level of energy is adjusted upwards: we quickly try out a variety of different models in search of one that makes the new information expected (perhaps by integrating new assumptions or adding content in other ways). When we cannot manage to generate a mental model that renders what we are experiencing likely, we tend to remain in an over-active state.
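A toy dynamic capturing this principle (the gain, decay, and baseline constants are arbitrary assumptions for illustration): arousal relaxes toward a baseline when input is expected, while surprisal, the Shannon information of the observed input, drives it back up.

```python
import math

def surprisal(p_observed: float) -> float:
    """Shannon surprisal (bits): zero for fully expected input,
    large for input the current model considers unlikely."""
    return -math.log2(p_observed)

def arousal_update(arousal: float, p_observed: float, gain: float = 0.5,
                   decay: float = 0.2, baseline: float = 1.0) -> float:
    """One step of the toy dynamic: recognition acts as an energy sink
    (relaxation toward baseline) while surprising input acts as a source."""
    return arousal + decay * (baseline - arousal) + gain * surprisal(p_observed)

a_expected = a_novel = 1.0
for _ in range(200):
    a_expected = arousal_update(a_expected, p_observed=0.9)  # well-modeled input
    a_novel = arousal_update(a_novel, p_observed=0.05)       # unexplained input
```

In this sketch, arousal settles at baseline + (gain/decay) x surprisal, so a mind that keeps failing to make its input expected remains stuck at a high-energy fixed point, which is the over-active state described above.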

This general principle applies to the world-sheet. One of the predominant ways in which a world-sheet reduces its energy (locally) is by morphing into something you can recognize or interpret. Thus the world-sheet in some way keeps on producing objects, at first familiar ones, but at higher energies the whole process can seem desperate or hopeless: one can only recognize things with a stretch of the imagination. Since humans generally lack experience with hyperbolic geometry, we usually don’t manage to imagine objects that are symmetric in their own native geometry. But when we do, and we fill them up with resonant light-mind-energy, then BAM! New harmonics of consciousness! New varieties of bliss! Music of the angels! OMG! Laughter till infinity and more, shared across the galaxy, in a hyperbolic transpersonal delight! It’s like LSD and N2O! Wow!

Forgive me, it is my first day. Let’s carry on. Since at high doses one does not know of any object that the world-sheet could reasonably generate, and the world-sheet has so much energy of its own, the energy can seem to spiral out of control. This explains in part the non-linear relationship between experienced intensity and DMT dose.

Like all aspects of one’s consciousness, the negative curvature of phenomenal space tends to decay over time (possibly through inhibition by the cortex). In this case, the feeling is one of “smoothing out the curves” and embedding the phenomenal objects in Euclidean 3D space. However, this is opposed by the effect that attention and (degrees of) awareness have on our phenomenal sheet, which is to increase its negative curvature. On DMT, anything that attention focuses on will begin branching, copying itself, and multiplying, a process that quickly saturates the scene to the point of packing in more spatial relationships than would fit in Euclidean 3D. The rate at which this happens is dose-dependent. The higher the dose, the less inhibiting control there is and the more intense the “folding” property of attention becomes. Thus, for different dosages one reaches different homeostatic levels of overall curvature in one’s phenomenal space. Since attention does not stop at any point during a DMT trip (it stays bright and intense all throughout), there isn’t really any rest period in which to sit back and watch the curvature get smoothed out on its own. Everything one thinks about, perceives, or imagines branches out and bifurcates at high speed.
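The dose-dependent homeostasis described here has a natural one-line dynamical model (a sketch, with arbitrary rate constants: attention folds in curvature at a rate proportional to dose, inhibitory control smooths it out at a rate proportional to the curvature already present):

```python
def curvature_trajectory(dose: float, steps: int = 1000, dt: float = 0.01,
                         fold_rate: float = 1.0, decay: float = 0.5):
    """Euler integration of dK/dt = fold_rate*dose - decay*K. Attention
    adds curvature at a dose-dependent rate while cortical inhibition
    removes it, so K settles at the homeostatic level fold_rate*dose/decay
    instead of growing without bound."""
    K, history = 0.0, []
    for _ in range(steps):
        K += dt * (fold_rate * dose - decay * K)
        history.append(K)
    return history

low = curvature_trajectory(dose=5.0)[-1]    # settles near 5 / 0.5 = 10
high = curvature_trajectory(dose=20.0)[-1]  # settles near 20 / 0.5 = 40
```

Each dose picks out its own stable plateau of total curvature, matching the claim that different dosages reach different homeostatic levels rather than runaway folding.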

Every moment during the experience is very hard to “grasp” because the way one normally does that is by focusing attention on it and shaping one’s world-sheet to account for the input. But here that very attention makes the world-sheet wobble, warp, and expand beyond recognition. Thus one might say that during a solid DMT experience one never sees the same thing twice, as the experience continues to evolve. That is, of course, unless you stumble upon (or deliberately create) stable phenomenal objects whose structure can survive the warping effect of attention.

(3) Hyperbolic Micro-structure of Consciousness

Subjectively, A says, negative curvature is associated with more energy. Perhaps this curvature happens at a very low level? An example to light up the imagination is using heat to fold a sheet of metal (thanks to thermal expansion). Whatever your attention focuses on seems to get heated up (in some sense) and expand as a result. The folding patterns themselves seem to store potential energy. Left on its own, this extra energy stored as negative curvature usually dissipates, but on DMT the dissipation is slowed (while the energizing effect of attention is heightened). Could this be the result of some very fine-grained micro-experiential change that gradually propagates upwards? With the help of our normal mental processes, the change in the micro-structure may propagate all the way into seemingly hyperbolic 2D and 3D surfaces.

Perhaps the most important difference between DMT in high doses and other psychedelics is that the micro-structure of consciousness drifts in such a way that tiny Droste effects bubble up into large Möbius transforms.

As noted already, these three algorithmic reductions are not incompatible. We just present them here due to their apparent explanatory power. A lot more theoretical work will be needed to make them quantitative and precise, but we are optimistic. The aim is now to develop an experimental framework to distinguish between the predictions that each candidate algorithmic reduction makes (including many not presented here). This is a work in progress.

Generalizing hyperbolization to non-spatial experiential fields

In the case of experiential fields such as body feelings, smells, and concepts, the “hyperbolization” takes different forms depending on the algorithmic reduction you use. I prefer the very general interpretation that one experiences a hyperbolic information geometry rather than just a hyperbolic space. In other words, on a psychedelic one organizes information such as body feelings into a relational graph that likewise exhibits negative curvature relative to its normal geometry. Arguing in favor of this interpretation would take another article, so we will leave that for another time.

Getting a handle on the DMT state

Gluing a 1-handle is easy on a 2-sphere. Tongue in cheek, sticking a little doughnut on a big ball allows you to grab the sphere and control it in some way. But how do you get a handle on hyperbolic space? The answer is to build hyperbolic manifolds at the core of one’s being, by imagining knots very intensely. The higher one is, the more complex the knot one can imagine in detail. Having practiced visualizations of this sort while sober certainly helps. If you imagine the knot with enough detail, you can then stress the environment surrounding it to represent a warped hyperbolic space. This way you give life to the complement of the knot (which is almost always hyperbolic!). We postulate that it is possible to study in detail the relationship between the knots imagined and the properties of the experiential worlds that result from their inversion (i.e. thinking about the geometry of the space surrounding the knot rather than the knot itself). A reports that different hyperbolic spaces generated this way (i.e. by imagining knots on tryptamines) have different levels of energy and unique resonant properties. Different kinds of music feel better in different kinds of hyperbolic manifolds. It takes more energy to “light up” a hyperbolic space like that, mostly due to its openness. This is why small doses of 2C-B can be helpful to create a positive backbone for the experience (providing the necessary warmth to light up the hyperbolic space). Admittedly MDMA tends to work best, but its use is inadvisable for reasons we will not get into (related to the hedonic treadmill). A healthy combination that both enables the visualization of hyperbolic spaces in a vivid way and also lights them up with positive hedonic tone reliably has yet to be found.
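As a side note for the mathematically curious, the parenthetical claim that knot complements are “almost always hyperbolic” is a precise theorem rather than a metaphor:

```latex
% Thurston's dichotomy: the complement of a knot K in the 3-sphere admits
% a complete hyperbolic metric if and only if K is neither a torus knot
% nor a satellite knot. The simplest hyperbolic example is the
% figure-eight knot 4_1, whose complement is glued from two regular
% ideal tetrahedra:
\[
  \operatorname{vol}\!\left(S^3 \setminus 4_1\right) \;=\; 2\,V_3 \;\approx\; 2.02988\ldots,
\]
% where V_3 \approx 1.01494 is the volume of the regular ideal
% tetrahedron in \mathbb{H}^3.
```

So the simplest knot one could imagine in detail past the trefoil already yields a genuinely hyperbolic complement, with a well-defined finite hyperbolic volume.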

Relatedly… Get a handle on your DMT trip by creating a stabilizing 4D hyperbolic manifold in four easy steps:

Unifying Your Space

God, the divine, open individualism, the number one, an abstract notion of self, and the thought of existence itself are all thoughts that work as great “unifiers” of large areas of phenomenal space. Indeed, these concepts can allow a person to connect the edges of the hyperbolic space and create a pocket of one’s experience that does not seem to have a boundary yet is extremely open. This may be a reason why such ideas are very common at high levels of psychedelia. In a sense, depending on the mind, they at times have the highest recruiting power for your multi-threaded attention.

Applications to Qualia Computing and Closing Thoughts

Beyond mere designer synesthesia, the future of consciousness research contains the possibility of exploring alternative geometries for the layout of our experiences. One’s overall level of energy, its manifestation, the allowed invariants, the logic gates, the differences in resonance, the granularity of the patterns, and so on, are all parameters that we will get to change in our minds to see what happens (in controlled and healthy ways, of course). The exploration of the state-space of consciousness is sure to lead to a combinatorial explosion. Even with good post-theoretical quantitative algorithmic reductions, it is likely that qualia computing scientists will still find an unfathomable number of distinct “prime” permutations. For some applications it may be more useful to use special kinds of hyperbolic spaces (like the complement of a certain class of knots), but for others a little sphere may suffice. Who knows. In the end, if a valence economy ends up dominating the world, then the value of hyperbolic phenomenal spaces will be proportional to the level of wellbeing and bliss that can be felt in them. Which space, in which resonant mode, generates the highest level of bliss? This is an empirical question with far-reaching economic implications.

Mathematics post-hyperbolic consciousness

I predict that some time in the next century or so, many of the breakthroughs in mathematics will take place in consciousness research centers. The ability to utilize arbitrary combinations of qualia with programmable geometry and information content (in addition to our whole range of pre-existing cognitive skills) will give people new semantic primitives related to mathematical structures and qualia systems currently unfathomable to us. In the end, studying the mathematics of consciousness and valence is perhaps the ultimate effective altruist endeavor in a world filled with suffering, since reverse-engineering valence would simplify paradise engineering. But even in a post-scarcity world, consciousness research will probably be the ultimate pastime, given the endless new discoveries waiting to be found in the state-space of consciousness.


*On the unexpected side effects of staring at a cauliflower on DMT: You can get lost in the hyperbolic reality of the (apparent) life force that spirals in a scale-free fractal fashion throughout the plant. The spirals may feel like magnetic vortexes that take advantage of your state to attract your attention. The cauliflower may pull you into its own world of interconnected fractals, and as soon as you start to trust it, it begins trying to recruit you for the cauliflower cause. The cauliflower may scare you into not eating it, and make you feel guilty about frying it. You may freak out a little, but when you come down you convince yourself that it was all just a hallucination. That said, you secretly worry it was for real. You may never choose to abstain from eating cauliflowers, but you will probably drop the knife when cooking it. You will break it apart with your own hands in the way you think minimizes its pain. You sometimes wonder whether it experiences agony as it is slowly cooked in the pan, and you drink alcohol to forget. Damn, don’t stare at a cauliflower while high on DMT if you ever intend to eat one again.

P.S. Note on Originality: The only mention I have been able to find that explicitly connects hyperbolic geometry in a literal sense with DMT (rather than just metaphorical talk of “hyperspace”) is a 2014 post on the Psychonaut subreddit. To my knowledge, no one has yet elaborated to any substantial degree on this interesting connection. That said, I’m convinced that in the days that follow a strong trip, psychedelic self-experimenters frequently wonder about the geometry of the places they explored. Yet they usually lack any conceptual framework to justify their intuitions or even verbalize them, so they quickly forget about them.

P.P.S. Example: Self-Dribbling Basketball:


Self-dribbling basketball

To the right you can see what a “self-dribbling basketball” looks like. The more you try to “grasp” what it is, the more curved it gets. That’s because you are adding energy with your attention, and you do not have enough recognition ability in this space to lower its energy and reduce the curvature enough to stabilize it. The curvature is so extreme at times that it produces constant “context switches”. This is the result of excess curvature being pushed towards the edge of your experience and turning into walls and corridors.

P.P.P.S. Example of world-sheet bending:

Below you can find two gifs that illustrate the behavior of a world-sheet on a 5 mg vs. a 20 mg dose. The speed at which you add curvature increases so much that the shapes and objects keep shifting to accommodate it all.

(Source of super-trippy symmetric hyperbolic manifold representations: http://newearthlovelight.tumblr.com/post/70053311720)