The Banality of Evil

In response to the Quora question “I feel like a lot of evil actions in the world have supporters who justify them (like Nazis). Can you come up with some convincing ways in which some of the most evil actions in the world could be justified?”, David Pearce writes:


“Tout comprendre, c’est tout pardonner.”
(Leo Tolstoy, War and Peace)

Despite everything, I believe that people are really good at heart.
(Anne Frank)

The risk of devising justifications of the worst forms of human behaviour is that there are people gullible enough to believe them. It’s not as though anti-Semitism died with the Third Reich. Even offering dispassionate causal explanation can sometimes be harmful. So devil’s advocacy is an intellectual exercise to be used sparingly.

That said, the historical record suggests that human societies don’t collectively set out to do evil. Rather, primitive human emotions get entangled with factually mistaken beliefs and ill-conceived metaphysics with ethically catastrophic consequences. Thus the Nazis seriously believed in the existence of an international Jewish conspiracy against the noble Aryan race. Hitler, so shrewd in many respects, credulously swallowed The Protocols of the Elders of Zion. And as his last testament disclosed, obliquely, Hitler believed that the gas chambers were a “more humane means” than the terrible fate befalling the German Volk. Many Nazis (Himmler, Höss, Stangl, and maybe even Eichmann) believed that they were acting from a sense of duty – a great burden stoically borne. And such lessons can be generalised across history. If you believed, like the Inquisition, that torturing heretics was the only way to save their souls from eternal damnation in Hell, would you have the moral courage to do likewise? If you believed that the world would be destroyed by the gods unless you practised mass human sacrifice, would you participate? [No, in my case, albeit for unorthodox reasons.]

In a secular context today, there exist upstanding citizens who would like future civilisation to run “ancestor simulations”. Ancestor simulations would create inconceivably more suffering than any crime perpetrated by the worst sadist or deluded ideologue in history – at least if the computational-functional theory of consciousness assumed by their proponents is correct. If I were to pitch a message to life-lovers aimed at justifying such a monstrous project, as you request, then I guess I’d spin some yarn about how marvellous it would be to recreate past wonders and see grandpa again.
And so forth.

What about the actions of individuals, as distinct from whole societies? Not all depraved human behaviour stems from false metaphysics or confused ideology. The grosser forms of human unpleasantness often stem just from our unreflectively acting out baser appetites (cf. Hamiltonian spite). Consider the neuroscience of perception. Sentient beings don’t collectively perceive a shared public world. Each of us runs an egocentric world-simulation populated by zombies (sic). We each inhabit warped virtual worlds centered on a different body-image, situated within a vast reality whose existence can be theoretically inferred. Or so science says. Most people are still perceptual naïve realists. They aren’t metaphysicians, or moral philosophers, or students of the neuroscience of perception. Understandably, most people trust the evidence of their own eyes and the wisdom of their innermost feelings, over abstract theory. What “feels right” is shaped by natural selection. And what “feels right” within one’s egocentric virtual world is often callous and sometimes atrocious. Natural selection is amoral. We are all slaves to the pleasure-pain axis, however heavy the layers of disguise. Thanks to evolution, our emotions are “encephalised” in grotesque ways. Even the most ghastly behaviour can be made to seem natural – like Darwinian life itself.

Are there some forms of human behaviour so appalling that I’d find it hard to play devil’s advocate in their mitigation – even as an intellectual exercise?

Well, consider, say, the most reviled hate-figures in our society – even more reviled than murderers or terrorists. Most sexually active paedophiles don’t set out to harm children: quite the opposite, harm is typically just the tragic by-product of a sexual orientation they didn’t choose. Posthumans may reckon that all Darwinian relationships are toxic. Of course, not all monstrous human behavior stems from wellsprings as deep as sexual orientation. Thus humans aren’t obligate carnivores. Most (though not all) contemporary meat eaters, if pressed, will acknowledge in the abstract that a pig is as sentient and sapient as a prelinguistic human toddler. And no contemporary meat eaters seriously believe that their victims have committed a crime (cf. Animal trial – Wikipedia). Yet if questioned why they cause such terrible suffering to the innocent, and why they pay for a hamburger rather than a veggieburger, a meat eater will come up with perhaps the lamest justification for human depravity ever invented:

“But I like the taste!”

Such is the banality of evil.

Person-moment affecting views

by Katja Grace (source)

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do where the sentence ‘If Alice didn’t exist, she wouldn’t have done X’ seems true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.
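The preference-based comparison above can be made concrete with a toy model. This is purely illustrative: the encoding of worlds as sets of person-moments, and of “meddling” preferences as a dictionary, is my own assumption, not anything from the original post.

```python
# Toy model of a person-moment affecting view with preference utilitarianism.
# A world is a set of person-moments; some person-moments hold "meddling"
# preferences about which *other* person-moments should exist.

def world_value(world, preferences):
    """Score a world by counting satisfied preferences of its person-moments.

    world: set of person-moment labels present in the world.
    preferences: dict mapping a person-moment to the set of person-moments
    it wants to exist.
    """
    score = 0
    for moment in world:
        for wanted in preferences.get(moment, set()):
            if wanted in world:
                score += 1
    return score

# Alice_t0 is shared between worlds A and C; she prefers that the
# computer-scientist moment Alice_t1A come to exist.
preferences = {"Alice_t0": {"Alice_t1A"}}

world_A = {"Alice_t0", "Alice_t1A"}  # Alice becomes a computer scientist
world_C = {"Alice_t0", "Alice_t1C"}  # Alice becomes an unhappy fighter pilot

assert world_value(world_A, preferences) > world_value(world_C, preferences)
```

Note that, as the text says, world A only outscores world C because of Alice_t0’s preference; nothing about Alice_t1A’s own experience contributes directly.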

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world for instance, it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, though still somewhat, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.



An interesting thing to point out here is that what Katja describes as the further-fact view is terminologically equivalent to what we here call Closed Individualism (cf. Ontological Qualia). This is the common-sense view that you start existing when you are born and stop existing when you die (which also has soul-based variants with possible pre-birth and post-death existence). This view is not very philosophically tenable because it presupposes that there is an enduring metaphysical ego distinct for every person. And yet, the vast majority of people still hold strongly to Closed Individualism. In some sense, in the article Katja tries to rescue the common-sense aspect of Closed Individualism in the context of ethics. That is, by trying to steel-man the common-sense notion that people (rather than moments of experience) are the relevant units for morality while also negating further-fact views, she provides reasons to keep using Closed Individualism as an intuition-pump in ethics (if only for pragmatic reasons). In general, I consider this kind of discussion to be a very fruitful endeavor, as it approaches ethics by touching upon the key parameters that matter fundamentally: identity, value, and counterfactuals.

As you may gather from pieces such as Wireheading Done Right and The Universal Plot, at Qualia Computing we tend to think the most coherent ethical system arises when we take as a premise that the relevant moral agents are “moments of experience”. Contra Person-affecting views, we don’t think it is meaningless to say that a given world is better than another one if not everyone in the first world is also in the second one. On the contrary – it really does not matter who lives in a given world. What matters is the raw subjective quality of the experiences in such worlds. If it is meaningless to ask “who is experiencing Alice’s experiences now?” once you know all the physical facts, then moral weight must be encoded in such physical facts alone. In turn, it could certainly happen then that the narrative aspect of an experience may turn out to be irrelevant for determining the intrinsic value of a given experience. People’s self-narratives may certainly have important instrumental uses, but at their core they don’t make it to the list of things that intrinsically matter (unlike, say, avoiding suffering).

A helpful philosophical move that we have found adds a lot of clarity here is to analyze the problem in terms of Open Individualism. That is, assume that we are all one consciousness and take it from there. If so, then the probability that you are a given person would be weighted by the amount of consciousness (or number of moments of experience, depending) that such person experiences throughout his or her life. You are everyone in this view, but you can only be each person one at a time from their own limited points of view. So there is a sensible way of weighting the importance of each person, and this is a function of the amount of time you spend being him or her (normalized by the amount of consciousness that person experiences, in case that is variable across individuals).
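As a rough illustration of the weighting described above: the inputs (conscious years, an intensity factor) and the simple product-then-normalize scheme are my own assumptions for the sketch, not a formula from the text.

```python
# Sketch: under Open Individualism, the probability of "being" a given
# person is proportional to how much conscious experience that person
# has over a lifetime (here modeled as conscious years times an
# intensity factor, then normalized so the weights sum to 1).

def identity_weights(people):
    """people: dict name -> (conscious_years, intensity).

    Returns a dict of normalized weights over the same names.
    """
    raw = {name: years * intensity
           for name, (years, intensity) in people.items()}
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

weights = identity_weights({
    "Alice": (60, 1.0),  # long life, baseline intensity of experience
    "Bob":   (30, 1.0),  # half the conscious lifespan
})
# Alice carries twice Bob's weight, and the weights sum to 1.
```

The normalization step is what implements the parenthetical caveat in the text: if intensity of consciousness varies across individuals, raw lifespan alone would misweight them.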

If consciousness emerges victorious in its war against pure replicators, then it would make sense that the main theory of identity people would hold by default would be Open Individualism. After all, it is only Open Individualism that aligns individual incentives and the total wellbeing of all moments of experience throughout the universe.

That said, in principle, it could turn out that Open Individualism is not needed to maximize conscious value – that while it may be useful instrumentally to align the existing living intelligences towards a common consciousness-centric goal (e.g. eliminating suffering, building a harmonic society, etc.), in the long run we may find that ontological qualia (the aspect of our experience that we use to represent the nature of reality, including our beliefs about personal identity) have no intrinsic value. Why bother experiencing heaven in the form of a mixture of 95% bliss and 5% ‘a sense of knowing that we are all one’, if you can instead just experience 100% pure bliss?

At the ethical limit, anything that is not perfectly blissful might end up being thought of as a distraction from the cosmic telos of universal wellbeing.

Qualia Research Institute presentations at The Science of Consciousness 2018 (Tucson, AZ)

As promised, here are the presentations Michael Johnson and I gave in Tucson last week to represent the Qualia Research Institute.

Here is Michael’s presentation:

And here is my presentation:


On a related note:

  1. Ziff Davis PCMag published an interview with me in anticipation of the conference.
  2. An ally of QRI, Tomas Frymann, gave a wonderful presentation about Open Individualism titled “Consciousness as Interbeing: Identity on the Other Side of Self-Transcendence”.
  3. As a bonus, here is the philosophy of mind stand-up comedy sketch I performed at their Poetry Slam, which took place on Friday night (you should likewise check out their classic Zombie Blues).

What If God Were a Closed Individualist Presentist Hedonistic Utilitarian With an Information-Theoretic Identity of Indiscernibles Ontology?

Extract from “Unsong” (chapter 18):

There’s an old Jewish children’s song called Had Gadya. It starts:

A little goat, a little goat
My father bought for two silver coins,
A little goat, a little goat

Then came the cat that ate the goat
My father bought for two silver coins
A little goat, a little goat

Then came the dog that bit the cat…

And so on. A stick hits the dog, a fire burns the stick, water quenches the fire, an ox drinks the water, a butcher slaughters the ox, the Angel of Death takes the butcher, and finally God destroys the Angel of Death. Throughout all of these verses, it is emphasized that it is indeed a little goat, and the father did indeed buy it for two silver coins.

[…]

As far as I know, no one has previously linked this song to the Lurianic Kabbalah. So I will say it: the deepest meaning of Had Gadya is a description of how and why God created the world. As an encore, it also resolves the philosophical problem of evil.

The most prominent Biblical reference to a goat is the scapegoating ritual. Once a year, the High Priest of Israel would get rid of the sins of the Jewish people by mystically transferring all of them onto a goat, then yelling at the goat until it ran off somewhere, presumably taking all the sin with it.

The thing is, at that point the goat contained an entire nation-year worth of sin. That goat was super evil. As a result, many religious and mystical traditions have associated unholy forces with goats ever since, from the goat demon Baphomet to the classical rather goat-like appearance of Satan.

So the goat represents evil. I’ll go along with everyone else saying the father represents God here. So God buys evil with two silver coins. What’s up?

The most famous question in theology is “Why did God create a universe filled with so much that is evil?” The classical answers tend to be kind of weaselly, and center around something like free will or necessary principles or mysterious ways. Something along the lines of “Even though God’s omnipotent, creating a universe without evil just isn’t possible.”

But here we have God buying evil with two silver coins. Buying to me represents an intentional action. Let’s go further – buying represents a sacrifice. Buying is when you sacrifice something dear to you to get something you want even more. Evil isn’t something God couldn’t figure out how to avoid, it’s something He covets.

What did God sacrifice for the sake of evil? Two silver coins. We immediately notice the number “two”. Two is not typically associated with God. God is One. Two is right out. The kabbalists identify the worst demon, the nadir of all demons, as Thamiel, whose name means “duality in God”. Two is dissonance, divorce, division, dilemmas, distance, discrimination, diabolism.

This, then, was God’s sacrifice. In order to create evil, He took up duality.

“Why would God want to create evil? God is pure Good!”

Exactly. The creation of anything at all other than God requires evil. God is perfect. Everything else is imperfect. Imperfection contains evil by definition. Two scoops of evil is the first ingredient in the recipe for creating universes. Finitude is evil. Form is evil. Without evil all you have is God, who, as the kabbalists tell us, is pure Nothing. If you want something, evil is part of the deal.

Now count the number of creatures in the song. God, angel, butcher, ox, water, fire, stick, dog, cat, goat. Ten steps from God to goat. This is the same description of the ten sephirot we’ve found elsewhere, the ten levels by which God’s ineffability connects to the sinful material world without destroying it. This is not a coincidence because nothing is ever a coincidence. Had Gadya isn’t just a silly children’s song about the stages of advancement of the human soul, the appropriate rituals for celebrating Passover in the Temple, the ancient Sumerian pantheon, and the historical conquests of King Tiglath-Pileser III. It’s also a blueprint for the creation of the universe. Just like everything else.


(see also: ANSWER TO JOB)

Every Qualia Computing Article Ever

The three main goals of Qualia Computing are to:

  1. Catalogue the entire state-space of consciousness
  2. Identify the computational properties of experience, and
  3. Reverse engineer valence (i.e. discover the function that maps formal descriptions of states of consciousness to values along the pleasure-pain axis)

Core Philosophy (2016)

2018

The Banality of Evil (quote)

Person-moment affecting view (quote)

Qualia Formalism in the Water Supply: Reflections on The Science of Consciousness 2018 (long)

Qualia Research Institute presentations at The Science of Consciousness 2018 (Tucson, AZ)

Modern Accounts of Psychedelic Action (quote)

From Point-of-View Fragmentation to Global Visual Coherence: Harmony, Symmetry, and Resonance on LSD (mostly quote/long)

What If God Were a Closed Individualist Presentist Hedonistic Utilitarian With an Information-Theoretic Identity of Indiscernibles Ontology? (quote)

Every Qualia Computing Article Ever

Qualia Computing Attending The Science of Consciousness 2018

Everything in a Nutshell (quote)

2017

Would Maximally Efficient Work Be Fun? (quote)

The Universal Plot: Part I – Consciousness vs. Pure Replicators (long)

No-Self vs. True Self (quote)

Qualia Manifesto (quote)

What Makes Tinnitus, Depression, and the Sound of the Bay Area Rapid Transit (BART) so Awful: Dissonance

Traps of the God Realm (quote)

Avoid Runaway Signaling in Effective Altruism (transcript)

Burning Man (long)

Mental Health as an EA Cause: Key Questions

24 Predictions for the Year 3000 by David Pearce (quote)

Why I think the Foundational Research Institute should rethink its approach (quote/long)

Quantifying Bliss: Talk Summary (long)

Connectome-Specific Harmonic Waves on LSD (transcript)

ELI5 “The Hyperbolic Geometry of DMT Experiences”

Qualia Computing at Consciousness Hacking (June 7th 2017)

Principia Qualia: Part II – Valence (quote)

The Penfield Mood Organ (quote)

The Most Important Philosophical Question

The Forces At Work (quote)

Psychedelic Science 2017: Take-aways, impressions, and what’s next (long)

How Every Fairy Tale Should End

Political Peacocks (quote)

OTC remedies for RLS (quote)

Their Scientific Significance is Hard to Overstate (quote)

Memetic Vaccine Against Interdimensional Aliens Infestation (quote)

Raising the Table Stakes for Successful Theories of Consciousness

Qualia Computing Attending the 2017 Psychedelic Science Conference

GHB vs. MDMA (quote)

Hedonium

2016

The Binding Problem (quote)

The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes (long)

Thinking in Numbers (quote)

Praise and Blame are Instrumental (quote)

The Tyranny of the Intentional Object

Schrödinger’s Neurons: David Pearce at the “2016 Science of Consciousness” conference in Tucson

Beyond Turing: A Solution to the Problem of Other Minds Using Mindmelding and Phenomenal Puzzles

Core Philosophy

David Pearce on the “Schrodinger’s Neurons Conjecture” (quote)

Samadhi (quote)

Panpsychism and Compositionality: A solution to the hard problem (quote)

LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid? (long)

Empathetic Super-Intelligence

Wireheading Done Right: Stay Positive Without Going Insane (long)

Just the fate of our forward light-cone

Information-Sensitive Gradients of Bliss (quote)

A Single 3N-Dimensional Universe: Splitting vs. Decoherence (quote)

Algorithmic Reduction of Psychedelic States (long)

So Why Can’t My Boyfriend Communicate? (quote)

The Mating Mind

Psychedelic alignment cascades (quote)

36 Textures of Confusion

Work Religion (quote)

Qualia Computing in Tucson: The Magic Analogy

In Praise of Systematic Empathy

David Pearce on “Making Sentience Great” (quote)

Philosophy of Mind Diagrams

Ontological Runaway Scenario

Peaceful Qualia: The Manhattan Project of Consciousness (long)

Qualia Computing So Far

You are not a zombie (quote)

What’s the matter? It’s Schrödinger, Heisenberg and Dirac’s (quote)

The Biointelligence Explosion (quote)

A (Very) Unexpected Argument Against General Relativity As A Complete Account Of The Cosmos

Status Quo Bias

The Super-Shulgin Academy: A Singularity I Can Believe In (long)

The effect of background assumptions on psychedelic research

2015

An ethically disastrous cognitive dissonance…

Some Definitions (quote)

Who should know about suffering?

Ontological Qualia: The Future of Personal Identity (long)

Google Hedonics

Solutions to World Problems

Why does anything exist? (quote)

State-Space of Background Assumptions

Personal Identity Joke

Getting closer to digital LSD

Psychedelic Perception of Visual Textures 2: Going Meta

On Triviality (quote)

State-Space of Drug Effects: Results

How to secretly communicate with people on LSD

Generalized Wada Test and the Total Order of Consciousness

State-space of drug effects

Psychophysics for Psychedelic Research: Textures (long)

I only vote for politicians who have used psychedelics. EOM.

Why not computing qualia?

David Pearce’s daily morning cocktail (2015) (quote)

Psychedelic Perception of Visual Textures

Should humans wipe out all carnivorous animals so the succeeding generations of herbivores can live in peace? (quote)

A workable solution to the problem of other minds

The fire that breathes reality into the equations of physics (quote)

Phenomenal Binding is incompatible with the Computational Theory of Mind

David Hamilton’s conversation with Alf Bruce about the nature of the mind (quote)

Manifolds of Consciousness: The emerging geometries of iterated local binding

The Real Tree of Life

Phenomenal puzzles – CIELAB

The psychedelic future of consciousness

Not zero-sum

Discussion of Fanaticism (quote)

What does comparatively matter in 2015?

Suffering: Not what your sober mind tells you (quote)

Reconciling memetics and religion.

The Reality of Basement Reality

The future of love

2014

And that’s why we can and cannot have nice things

Breaking the Thought Barrier: Ethics of Brain Computer Interfaces in the workplace

How bad does it get? (quote)

God in Buddhism

Practical metaphysics

Little known fun fact

Crossing borders (quote)

A simple mystical explanation

 


Bolded titles mean that the linked article is foundational: it introduces new concepts, vocabulary, heuristics, research methods, frameworks, and/or thought experiments that are important for the overall project of consciousness research. These tend to be articles that also discuss concepts in much greater depth than other articles.

The “long” tag means that the post has at least 4,000 words. Most of these long articles are in the 6,000 to 10,000 word range. The longest Qualia Computing article is the one about Burning Man which is about 13,500 words long (and also happens to be foundational as it introduces many new frameworks and concepts).

Quotes and transcripts are usually about: evolutionary psychology, philosophy of mind, ethics, neuroscience, physics, meditation, and/or psychedelic phenomenology. By far, David Pearce is the most quoted person on Qualia Computing.


Fast stats:

  • Total number of posts: 120
  • Foundational articles: 27
  • Articles over 4,000 words: 15
  • Original content: 73
  • Quotes and transcripts: 47

 

Would Maximally Efficient Work Be Fun?

Excerpt from Superintelligence: Paths, Dangers, Strategies (2014) by Nick Bostrom (pg. 207-210).

Would Maximally Efficient Work Be Fun?

One important variable in assessing the desirability of a hypothetical condition like this* is the hedonic state of the average emulation**. Would a typical emulation worker be suffering or would he be enjoying the experience of working hard on the task at hand?

We must resist the temptation to project our own sentiments onto the imaginary emulation worker. The question is not whether you would feel happy if you had to work constantly and never again spend time with your loved ones – a terrible fate, most would agree.

It is moderately more relevant to consider the current human average hedonic experience during working hours. Worldwide studies asking respondents how happy they are find that most rate themselves as “quite happy” or “very happy” (averaging 3.1 on a scale from 1 to 4)***. Studies on average affect, asking respondents how frequently they have recently experienced various positive or negative affective states, tend to get a similar result (producing a net affect of about 0.52 on a scale from -1 to 1). There is a modest positive effect of a country’s per capita income on average subjective well-being.**** However, it is hazardous to extrapolate from these findings to the hedonic state of future emulation workers. One reason that could be given for this is that their condition would be so different: on the one hand, they might be working much harder; on the other hand, they might be free from diseases, aches, hunger, noxious odors, and so forth. Yet such considerations largely miss the mark. The much more important consideration here is that hedonic tone would be easy to adjust through the digital equivalent of drugs or neurosurgery. This means that it would be a mistake to infer the hedonic state of future emulations from the external conditions of their lives by imagining how we ourselves and other people like us would feel in those circumstances. Hedonic state would be a matter of choice. In the model we are currently considering, the choice would be made by capital-owners seeking to maximize returns on their investment in emulation-workers. Consequently, this question of how happy emulations would feel boils down to the question of which hedonic states would be most productive (in the various jobs that emulations would be employed to do). [Emphasis mine]
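The two survey results quoted above can be put on a common footing with a simple linear rescaling (a simplifying assumption on my part, since the underlying instruments differ); doing so shows that both measures land in roughly the same upper range, which is what the text means by "a similar result":

```python
def rescale(value, lo, hi):
    """Linearly map a score from the interval [lo, hi] onto [0, 1]."""
    return (value - lo) / (hi - lo)

# Self-reported happiness: 3.1 on a 1-to-4 scale
happiness = rescale(3.1, 1, 4)      # approximately 0.70
# Net affect: 0.52 on a -1-to-1 scale
net_affect = rescale(0.52, -1, 1)   # approximately 0.76

print(round(happiness, 2), round(net_affect, 2))
```

Both normalized scores sit around 0.7 of the way up their respective scales, consistent with the claim that the two methodologies broadly agree.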

Here, again, one might seek to draw an inference from observations about human happiness. If it is the case, across most times, places, and occupations, that people are typically at least moderately happy, this would create some presumption in favor of the same holding in a post-transition scenario like the one we are considering. To be clear, the argument in this case would not be that human minds have a predisposition towards happiness so they would probably find satisfaction under these novel conditions; but rather that a certain average level of happiness has proved adaptive for human minds in the past, so maybe a similar level of happiness will prove adaptive for human-like minds in the future. Yet this formulation also reveals the weakness of the inference: to wit, that the mental dispositions that were adaptive for hunter-gatherer hominids roaming the African savanna may not necessarily be adaptive for modified emulations living in post-transition virtual realities. We can certainly hope that the future emulation-workers would be as happy as, or happier than, typical workers were in human history; but we have yet to see any compelling reason for supposing it would be so (in the laissez-faire multipolar scenario currently under examination).

Consider the possibility that the reason happiness is prevalent among humans (to whatever limited extent it is prevalent) is that cheerful mood served a signaling function in the environment of evolutionary adaptedness. Conveying the impression to other members of the social group of being in flourishing condition–in good health, in good standing with one’s peers, and in confident expectation of continued good fortune–may have boosted an individual’s popularity. A bias toward cheerfulness could thus have been selected for, with the result that human neurochemistry is now biased toward positive affect compared to what would have been maximally efficient according to simpler materialistic criteria. If this were the case, then the future of joie de vivre might depend on cheer retaining its social signaling function unaltered in the post-transition world: an issue to which we will return shortly. 

What if glad souls dissipate more energy than glum ones? Perhaps the joyful are more prone to creative leaps and flights of fancy–behaviors that future employers might disprize in most of their workers. Perhaps a sullen or anxious fixation on simply getting on with the job without making mistakes will be the productivity-maximizing attitude in most lines of work. The claim here is not that this is so, but that we do not know that it is not so. Yet we should consider just how bad it could be if some such pessimistic hypothesis about a future Malthusian state turned out to be true: not only because of the opportunity cost of having failed to create something better–which would be enormous–but also because the state could be bad in itself, possibly far worse than the original Malthusian state.

We seldom put forth full effort. When we do, it is sometimes painful. Imagine running on a treadmill at a steep incline–heart pounding, muscles aching, lungs gasping for air. A glance at the timer: your next break, which will also be your death, is due in 49 years, 3 months, 20 days, 4 hours, 56 minutes, and 12 seconds. You wish you had not been born.

Again the claim is not that this is how it would be, but that we do not know that it is not. One could certainly make a more optimistic case. For example, there is no obvious reason that emulations would need to suffer bodily injury and sickness: the elimination of physical wretchedness would be a great improvement over the present state of affairs. Furthermore, since such stuff as virtual reality is made of can be fairly cheap, emulations may work in sumptuous surroundings–in splendid mountaintop palaces, on terraces set in a budding spring forest, or on the beaches of an azure lagoon–with just the right illumination, temperature, scenery and décor; free from annoying fumes, noises, drafts, and buzzing insects; dressed in comfortable clothing, feeling clean and focused, and well nourished. More significantly, if–as seems perfectly possible–the optimum human mental state for productivity in most jobs is one of joyful eagerness, then the era of the emulation economy could be quite paradisiacal.

There would, in any case, be a great option value in arranging matters in such a manner that somebody or something could intervene to set things right if the default trajectory should happen to veer toward dystopia. It could also be desirable to have some sort of escape hatch that would permit bailout into death and oblivion if the quality of life were to sink permanently below the level at which annihilation becomes preferable to continued existence.

Unconscious outsourcers?

In the long run, as the emulation era gives way to an artificial intelligence era (or if machine intelligence is attained directly via AI without a preceding whole brain emulation stage), pain and pleasure might possibly disappear entirely in a multipolar outcome, since a hedonic reward mechanism may not be the most effective motivation system for a complex artificial agent (one that, unlike the human mind, is not burdened with the legacy of animal wetware). Perhaps a more advanced motivation system would be based on an explicit representation of a utility function or some other architecture that has no exact functional analogs to pleasure and pain.

A related but slightly more radical multipolar outcome–one that could involve the elimination of almost all value from the future–is that the universal proletariat would not even be conscious. This possibility is most salient with respect to AI, which might be structured very differently than human intelligence. But even if machine intelligence were initially achieved through whole brain emulation, resulting in conscious digital minds, the competitive forces unleashed in a post-transition economy could easily lead to the emergence of progressively less neuromorphic forms of machine intelligence, either because synthetic AI is created de novo or because the emulations would, through successive modifications and enhancements, increasingly depart from their original human form.


* Scenarios where sentient emulations are being used to do maximally efficient work.

** Footnote: “An ethical evaluation might take into account many other factors as well. Even if all the workers were constantly well pleased with their condition, the outcome might still be deeply morally objectionable on other grounds–though which other grounds is a matter of dispute between rival moral theories. But any plausible assessment would consider subjective well-being to be one important factor. See also Bostrom and Yudkowsky (2015).”

*** World Values Survey (2008).

**** Helliwell and Sachs (2012).

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

Alan Watts, talking about Camus’s claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (a.k.a. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together as a unit in order to hang together. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. In the meantime, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain *in its own terms* the reasons why other worldviews are popular as well. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people. That evil beings exist objectively, and that it is righteous to punish them and see them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck here. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, where there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumptions of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above) we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived as either sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

Balance Between Good and Evil by tvtropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication, because they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws), or because they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and project it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, binds them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory-farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and lack of thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for such a reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, where they recognize that there are states of consciousness that are inherently better in some intrinsic subjective value sense than others.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics for fear of sounding unwise or insensitive. And when push comes to shove, they lack good arguments to decisively rule out why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.
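The selection-pressure worry above can be made concrete with a toy model (a minimal sketch on my part; all function names, rates, and costs here are illustrative assumptions, not part of the original text). Agents that divert even a small fraction of their resources to wellbeing compound more slowly than agents that invest everything in copying themselves, so an initially rare pure replicator can come to dominate the population:

```python
# Toy replicator dynamics: "sentient" agents pay a small resource cost
# to care about wellbeing; "pure replicators" pay nothing. We track the
# pure replicators' population share over generations.

def share_of_pure_replicators(generations, wellbeing_cost=0.1,
                              base_growth=1.5, initial_pure_fraction=0.01):
    sentient = 1.0 - initial_pure_fraction
    pure = initial_pure_fraction
    for _ in range(generations):
        sentient *= base_growth * (1.0 - wellbeing_cost)  # pays the cost
        pure *= base_growth                               # pays nothing
        total = sentient + pure
        sentient, pure = sentient / total, pure / total   # renormalize shares
    return pure

# Even a modest 10% wellbeing cost lets a 1% minority of pure
# replicators become the majority within ~50 generations.
print(round(share_of_pure_replicators(50), 3))
```

The point of the sketch is not the specific numbers but the structure: under fixed differential growth, the cheaper pattern wins by default, which is why the essay argues that favorable selection pressures must be engineered rather than assumed.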

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures lead in the opposite direction, where negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and relative to the general population, they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is formalizing the situation and working out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts that suggest it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used in order to steer evolution in a positive direction. In particular, as of 2017:

  1. A very large proportion of the economy is dedicated to trading positive experiences for money, rather than just tools for survival or power. Thus an economy of information about states of consciousness is still feasible.
  2. A large fraction of the population is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and, ultimately, economically useful (see: hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested in (or at least curious about) the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence but also, amazingly, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies whose highest value is collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider, for example, the contrast between Burning Man and Las Vegas. Burning Man works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place filled to the brim with pure replicators in the form of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that makes it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. By definition, pure replicators will use resources as efficiently as possible to make copies of themselves, without regard for the negative consequences of their actions. In the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they require. Such utility is a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Image: system_specifications]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence: the inputs, the outputs, the constraints (such as temperature, materials, etc.), and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

[Image: basic_system]

Now, we introduce the notion of the “valence for the system given a particular input” (i.e. the valence for the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

[Image: valence_altruism]

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the sum, over all inputs, of the utility of each input weighted by its probability.

[Image: input_probabilities]

(Note: a more complete formalization would also weight the importance of each input-output transformation, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system, given the distribution of inputs it is used for, its valence, its efficiency metrics, and its constraints, as E[U(s|v, e, c, P(I))]:

[Image: chosen_system]
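In symbols, and using the notation defined above (this is a reconstruction consistent with those definitions; the figure may use slightly different notation):

```latex
E\big[U(s \mid v, e, c, P(I))\big] \;=\; \sum_{i \in I} P(i)\, U\big(e(o \mid i, s, c),\, v(s \mid i)\big),
\qquad
s^{*} \;=\; \operatorname*{arg\,max}_{s}\; E\big[U(s \mid v, e, c, P(I))\big]
```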

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
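To make the selection step concrete, here is a minimal, self-contained sketch. The dict-based “systems”, the additive form of U, and the scalar altruism weight are illustrative assumptions for this sketch, not part of the formalism itself.

```python
# Sketch of system selection: an intelligence picks the system s that
# maximizes E[U] = sum over inputs of P(i) * U(e(o|i,s,c), v(s|i)).

def expected_utility(system, input_probs, altruism):
    """E[U] with a simple additive utility: efficiency + altruism * valence."""
    return sum(p * (system["efficiency"][i] + altruism * system["valence"][i])
               for i, p in input_probs.items())

def choose_system(systems, input_probs, altruism):
    """The intelligence chooses the system with the highest expected utility."""
    return max(systems, key=lambda s: expected_utility(s, input_probs, altruism))

# A pure replicator (altruism = 0) picks the efficient-but-suffering system;
# an altruistic intelligence (altruism = 1) prefers the kinder one.
fast_but_cruel = {"name": "A", "efficiency": {"x": 1.0}, "valence": {"x": -2.0}}
slow_but_kind = {"name": "B", "efficiency": {"x": 0.8}, "valence": {"x": 0.5}}
probs = {"x": 1.0}
```

Note how nothing in the selection machinery itself forces the valence term to matter: set the altruism weight to zero and the formalism collapses to pure efficiency maximization.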

Pure replicators will be better at surviving so long as the chances of reproducing do not depend on altruism. If being altruistic does not increase reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). As evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of those systems. In other words, evolution pushes intelligences to become pure replicators in the long run.

Hence we should ask: how can altruism increase the chances of reproduction? One possibility is for the environment to reward altruistic entities. Unfortunately, in the long run environments that reward altruism might produce less efficient entities than environments that don’t. If there are two very similar environments, one which rewards altruism and one which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism implements the reward for altruism. Thus, we suggest looking for environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to last for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.
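The dynamic described above can be illustrated with a toy replicator model. All the numbers here are hypothetical: altruists pay a small efficiency cost for caring about the valence of the systems they use, and unless the environment rewards altruism enough to offset that cost, their population share collapses.

```python
# Toy replicator dynamics: two lineages compete, with relative fitness
# determined by an altruism cost and an (optional) environmental reward.

def evolve(share_altruist, share_replicator, altruism_cost=0.05,
           altruism_reward=0.0, generations=200):
    a, r = share_altruist, share_replicator
    for _ in range(generations):
        a *= 1.0 - altruism_cost + altruism_reward  # altruist fitness
        r *= 1.0                                    # pure replicator fitness
        a, r = a / (a + r), r / (a + r)             # renormalize to shares
    return a, r

no_reward = evolve(0.5, 0.5)                      # altruists all but vanish
rewarded = evolve(0.5, 0.5, altruism_reward=0.1)  # altruism now pays off
```

The interesting question raised in the paragraph above is whether the reward term can be made a structural feature of the environment itself, rather than a fragile add-on that a more efficient neighboring environment can destroy.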

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it is possible to compute the valence of a system in terms of its state. How one goes about computing that valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in sketching the rough outlines of the kinds of theories available. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction at which they look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s inside the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How to formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned with pain or pleasure, and there is cause to believe that behaviorism itself might be anti-depressant for people for whom introspection yields more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are hedonically charged, and why, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind becomes more rigorous and takes into account more advances in neuroscience and AI, we will see more hypotheses about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state can be. The fact that algorithms do not carve nature at its joints would imply that consciousness is not a well-defined element of reality either. But rather than treating this as a reductio ad absurdum of functionalism, many of its proponents have instead concluded that consciousness itself is not a natural kind. This represents an important challenge for defining the valence of a system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.

We also should note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purposes of this article are close enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, both in relativity and in quantum mechanics, the underlying nature of a system is frame-invariant: its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need this frame-invariant description of what the system is in a given state. Thus, we need a function that takes physical measurements of the system as input and returns the best possible approximation to what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. Now, in practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well-approximated by high-level features of it.

The main problem with Non-Materialist Physicalism arises when one considers systems that have similar efficiency metrics, perform the same algorithms, and look the same in all relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of that stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness to process information efficiently. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal aspects of our existence happen to be computationally advantageous. Just as using quantum computing for integer factorization, or for solving problems amenable to annealing, might give quantum computers a computational edge over their classical counterparts, so might the use of bound conscious experiences have helped sentient animals outcompete non-sentient ones.

Of course, there is as yet no evidence of macroscopic quantum coherence in the brain, and the brain seems too hot for it anyway, so on the face of it Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or of non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings for which phenomenal pleasure and pain are very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the constraints and efficiency metrics faced by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations, delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). So, at one extreme, it may be that a fully optimized brain simply does not need consciousness to solve problems. At the other extreme, it may turn out that consciousness is extraordinarily more powerful when used optimally. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out a way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all future entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Thus the case for pushing our evolutionary history in the right direction right now before we give up.

Coming Next: The Hierarchy of Cooperators

Now that we have covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality on its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, an available repertoire of states of consciousness, and a set of beliefs and practices to help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms producing the output are probably highly complex and use information processing with human-like linguistic representations, which means that, by all means, the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think these are about the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

What Makes Tinnitus, Depression, and the Sound of the Bay Area Rapid Transit (BART) so Awful: Dissonance

The Bay Area Rapid Transit (BART) is often criticized for its loudness. According to measurements made in 2010, the noise reaches up to 100 decibels, enough to cause permanent hearing loss in the long term. This is why you should always wear earplugs on the BART, which can decrease the volume by up to 30 or so decibels, making it tolerable and harmless.

And while pointing out that BART gets really loud is indeed important, I would claim that there is something even more important to note. Namely, that BART is not merely loud, but it is also distinctly dissonant. Talking only about the stretch that goes from Millbrae to Embarcadero, an analysis I conducted reveals that the single worst period of dissonance happens on the ride from Glen Park to Balboa Park (at around the 20 second mark after one starts). If you are curious to hear it, you can check it out for yourself here. That said, I do not recommend listening to that track on repeat for any length of time, as it may have a strong mood-diminishing effect.
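The post doesn’t specify how the dissonance analysis was performed; a standard way to score sensory dissonance is the Plomp-Levelt roughness curve as parameterized by Sethares, summed over all pairs of spectral partials. The sketch below is that textbook model (with Sethares’ published constants), not necessarily the exact method used on the BART recordings.

```python
# Plomp-Levelt / Sethares sensory dissonance: roughness peaks when two
# partials are close but not identical, relative to the critical band.
import math

def pair_dissonance(f1, a1, f2, a2):
    """Roughness contributed by a pair of partials (frequency, amplitude)."""
    lo, hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * lo + 19.0)   # scales the curve to the critical band
    x = s * (hi - lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def total_dissonance(partials):
    """Total roughness of a spectrum: sum over all unordered pairs."""
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            f1, a1 = partials[i]
            f2, a2 = partials[j]
            total += pair_dissonance(f1, a1, f2, a2)
    return total

# Partials a quarter-tone apart grate far more than a clean octave:
octave = [(440.0, 1.0), (880.0, 1.0)]
cluster = [(440.0, 1.0), (452.0, 1.0)]
```

Running total_dissonance over successive windows of a recording, with partials extracted via FFT peak-picking, would yield the kind of dissonance-over-time profile described above.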

Too bad that some of the beautiful patterns found at the entrance of the Balboa Park BART station are not equally matched by beautiful sounds in the actual ride:

[Image: Balboa Park BART station entrance]

Balboa Park has some beautiful visual patterns (useful for psychophysics).

Ultimately, dissonance might be much more important than loudness, insofar as it tracks the degree to which environmental sound directly impacts quality of life. Thus, in addition to metrics that track how loud cities are, it might be worthwhile to incorporate a sort of “dissonance index” into our sound-pollution measurements.

A General Framework for Valence

At the Qualia Research Institute we have pointed out that the connection between dissonance and valence may not be incidental. In particular, we suggest that it falls out as a possible implication of the Symmetry Theory of Valence (STV). The STV is itself a special case of the general principle we call Valence Structuralism, which claims that the degree to which an experience feels good or bad is a consequence of the structure of the object whose mathematical properties are isomorphic to that system’s phenomenology. The STV goes one step further and suggests that the relevant mathematical property that denotes valence is the symmetry of this object.

[Image: valence_structuralism]

In Quantifying Bliss, we postulated that a general framework for describing the valence of an experience could be constructed in terms of Consonance-Dissonance-Noise Signatures (“CDNS” for short): the degree to which a given state contains consonance, dissonance, and noise. As an implication of the Symmetry Theory of Valence, we postulate that consonance directly tracks positive valence, dissonance negative valence, and noise neutral valence. But wait, there is more! Each of these “channels” has a spectrum of its own. That is to say, one could be experiencing a high degree of low-frequency dissonance at the same time as high-frequency consonance, perhaps over a general full-spectrum background noise. Any combination is possible.
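As a toy illustration of the idea (the representation and the numbers are made up for this sketch; this is not QRI’s actual formalism), a CDNS can be encoded as one (consonance, dissonance, noise) triple per frequency band, from which a crude valence estimate can be read off per the STV:

```python
# Toy CDNS: consonance contributes positive valence, dissonance negative,
# and noise roughly neutral, summed over frequency bands.

def cdns_valence(cdns, noise_weight=0.0):
    """cdns: list of (consonance, dissonance, noise) triples per band."""
    return sum(c - d + noise_weight * n for c, d, n in cdns)

# e.g. low-frequency dissonance (depression-like) coexisting with some
# high-frequency consonance and broadband noise:
mixed_state = [(0.1, 0.8, 0.2),   # low band
               (0.2, 0.1, 0.3),   # mid band
               (0.7, 0.1, 0.2)]   # high band
```

The per-band structure is the point: it lets a single state carry, say, strongly negative low-frequency valence and mildly positive high-frequency valence at once, rather than a single scalar mood.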

The Quantifying Bliss article describes how recent advances in neuroscience might be used to quantify a person’s CDNS (namely, using the pair-wise interactions between a person’s connectome-specific harmonics).

Many Heads But Just One Body

Richard Wu has a good article on his experience with tinnitus. One of the things that stands out is the level of detail with which he describes it. At its worst, he says, he experiences not just a single sound but several kinds at once:

By the way, its getting louder isn’t even the worst. Sometimes I develop an entirely new tinnitus. […] Today, I have three:

  1. A very high-pitched CRT monitor / TV-like screech (similar to the one in the video).
  2. A deep, low, powerful rumbling.
  3. A mid-tone that adjusts its volume based on external sounds. If my environment is loud, it will be loud; if my environment is quiet, it will ring more softly.

As with the BART, where people complain about how loud it is while missing the most important piece (its dissonance), tinnitus may have a similar reporting problem. What makes tinnitus so unbearable might not be so much the fact that there is always a hallucinated sound present, but rather that such a sound (or cluster of sounds) is so unpleasant, distracting, and oppressive. The actual texture of tinnitus may be just as, if not more, important than its mere presence.

We believe that Valence Structuralism, and in particular the Symmetry Theory of Valence, are powerful explanatory frameworks that can tie together a wide range of disparate phenomena concerning good and bad feelings. And if true, then for every unpleasant experience we may have, a reasonable thing to ask might be: in what way is this dissonant? For example: depression may be a sort of whole-body low-frequency dissonance (similar to, but different in texture from, nausea). Anxiety, on the other hand, along with irritation and anger, might be a manifestation of high-frequency dissonance.

Likewise, whenever a good or pleasant feeling is found, a reasonable question to ask is: in what ways is this consonant? Let’s think about the three kinds of euphoria uncovered in State-Space of Drug Effects. Fast euphoria (stimulants, exercise, anticipation, etc.) might be what high-frequency consonance feels like. Slow euphoria (relaxation, opioids, etc.) might be what low-frequency consonance feels like. And what about spiritual euphoria (what you get by thinking about philosophy, tripping, and taking dissociatives)? Well, however trippy this may sound, it might well be that this is a sort of fractal consonance, in which multiple representations of various spatio-temporal resolutions become interlocked in a pleasant dance (which may, or may not, allow you to process information more efficiently).

Now what about noise? Here is where we place all of the blunting agents. The general explanation for why anti-depressants of the SSRI variety tend to blunt feelings might be that their very mechanism of action is to increase neuronal noise and thus reduce the signal-to-noise ratio. Crying, orgasm, joy, and ragegasms all share the quality of being highly symmetric harmonic states, so SSRIs, with their generalized effect of adding noise to one’s neuronal environment, would be expected to diminish the intensity (and textural orderliness) of each of these states. We also know that SSRIs often reduce the subjective intensity of tinnitus (and presumably the awfulness of BART sounds), which makes sense in this framework.

The STV would also explain MDMA’s effects as a generalized reduction in both dissonance and noise across the full spectrum, together with a generalized increase in consonance, also across the full spectrum. This would supply the missing link to explain why MDMA is a potential tool for reducing tinnitus, and not just emotional pain. The trick is that perceptual dissonance and negative affect may share a common underlying quality: anti-symmetry. And MDMA, being a chief symmetrifying agent, takes it all away.

Many further questions remain: what makes meaningful experiences so emotionally rich? Why do some people enjoy weird sounds? Why is emo music so noisy? What kind of valence can be experienced when one’s consciousness has acquired a hyperbolic geometry? I will address these and many other interesting questions in future posts. Stay tuned!

Traps of the God Realm

From Opening the Heart of Compassion by Martin Lowenthal and Lar Short (pages 132-136).

Seeking Oneness

In this realm we want to be “one with the universe.” We are trying to return to a time when we felt no separation, when the world of our experience seemed to be the only world. We want to recover the experience and comfort of the womb. In the universe of the womb, everything was ours without qualification and was designed to support our existence and growth. Now we want the cosmos to be our womb, as if it were designed specifically for our benefit.

We want satisfaction to flow more easily, naturally and automatically. This seems less likely when we are enmeshed in the everyday affairs of the world. Therefore, we withdraw to the familiar world of what is ours, of what we can control, and of our domain of influence. We may even withdraw to a domain in the mind. Everything seems to come so much easier in the realm of thought, once we have achieved some modest control over our minds. Insulating ourselves from the troubles of others and of life, we get further seduced by the seeming limitlessness of this mental world. 

In this process of trance formation, we try to make every sound musical, every image a work of art, and every feeling pleasant. Blocking out all sources of irritation, we retreat to a self-proclaimed “higher” plane of being. We cultivate the “higher qualities of life,” not settling for a “mundane” life.

Masquerade of Higher Consciousness

The danger for those of us on a spiritual path is that the practices and the teachings can be enlisted to serve the realm rather than to dissolve our fixations and open us to truth. We discover that we can go beyond sensual pleasure and material beauty to refined states of consciousness. We achieve purely mental pleasures of increasing subtlety and learn how to maintain them for extended periods. We think we can maintain our new vanity and even expand it to include the entire cosmos, thus vanquishing change, old age, and death. Chogyam Trungpa Rinpoche called this process “spiritual materialism.”

For example, we use a sense of spaciousness to expand our consciousness by imposing our preconception of limitlessness on the cosmos. We see everything that we have created and “it is good.” Our vanity in the god realm elevates our self-image to the level of the divine–we feel capable of comprehending the universe and the nature of reality.

We move beyond our contemplation of limitless space, expanding our consciousness to include the very forces that create vast space. As the creator of vast space, we imagine that we have no boundaries, no limits, and no position. Our mind can now include everything. We find that we do not have concepts for such images and possibilities, so we think that the Divine or Essence must be not any particular thing we can conceive of, must be empty of conceptual characteristics.

Thus our vain consciousness, as the Divine, conceives that it has no particular location, is not anything in particular, and is itself beyond imagination. We arrive at the conclusion that even this attempt to comprehend emptiness is itself a concept, and that emptiness is devoid of inherent meaning. We shift our attention to the idea of being not not any particular thing. We then come to the glorious position that nothing can be truly stated, that nothing has inherent value. This mental understanding becomes our ultimate vanity. We take pride in it, identify as someone who “knows”, and adopt a posture in the world as someone who has journeyed into the ultimate nature of the unknown.

In this way we create more and more chains that bind us and limit our growth as we move ever inward. When we think we are becoming one with the universe, we are only achieving greater oneness with our own self-image. Instead of illuminating our ignorance, we expand its domain. We become ever more disconnected from others, from communication and true sharing, and from compassion. We subtly bind ourselves ever more tightly, even to the point of suffocation, under the guise of freedom in spaciousness.

Spiritual Masquerades of Teachers and Devoted Students

As we acquire some understanding and feel expansive, we may think we are God’s special gift to humanity, here to teach the truth. Although we may not acknowledge that we have something to prove, at some level we are trying to prove how supremely unique and important we are. Our spiritual life-style is our expression of that uniqueness and significance.

Spiritual teachers run a great danger of falling into the traps of the god realm. If a teacher has charisma and the ability to channel and radiate intense energy, this power may be misused to engender hope in students and to bind them in a dependent relationship. The true teacher undermines hope, teaches by the example of wisdom and compassion, and encourages students to be autonomous by investigating truth themselves, checking their own experience, and trusting their own results more than faith.

The teacher is not a god but a bridge to the unknown, a guide to the awareness qualities and energy capacities we want for our spiritual growth. The teacher, who is the same as we are, demonstrates what is possible in terms of aliveness and how to use the path of compassion to become free. In a sense, the teacher touches both aspects of our being: our everyday life of habits and feelings on the one hand and our awakened aliveness and wisdom on the other. While respect for and openness to the teacher are important for our growth and freedom, blind devotion fixates us on the person of the teacher. We then become confined by the limitations of the teacher’s personality rather than liberated by the teachings.

False Transcendence

Many characteristics of this realm–creative imagination, the tendency to go beyond assumed reality and individual perspectives, and the sense of expansiveness–are close to the underlying dynamic of wonderment. In wonder, we find the wisdom qualities of openness, true bliss, the realization of spaciousness within which all things arise, and alignment with universal principles. The god realm attitude results in superficial experiences that fit our preconceptions of realization but that lack the authenticity of wonder and the grounding in compassion and freedom.

Because the realm itself seems to offer transcendence, this is one of the most difficult realms to transcend. The heart posture of the realm propels us to transcend conflict and problems until we are comfortable. The desire for inner comfort, rather than for an authentic openness to the unknown, governs our quest. But many feelings arise during the true process of realization. At certain stages there is pain and disorientation, and at others a kind of bliss that may make us feel like we are going to burst (if there was something or someone to burst). When we settle for comfort we settle for the counterfeit of realization–the relief and pride we feel when we think we understand something.

Because we think that whatever makes us feel good is correct, we ignore disturbing events, information, and people and anything else that does not fit into our view of the world. We elevate ignorance to a form of bliss by excluding from our attention everything that is non-supportive.

Preoccupied with self, with grandiosity, and with the power and radiance of our own being, we resist the mystery of the unknown. When we are threatened by the unknown, we stifle the natural dynamic of wonder that arises in relation to all that is beyond our self-intoxication. We must either include vast space and the unknown within our sense of ourselves or ignore it because we do not want to feel insignificant and small. Our sense of awe before the forces of grace cannot be acknowledged for fear of invalidating our self-image.

Above the Law

According to our self-serving point of view, we are above the laws of nature and of humankind. We think that, as long as what we do seems reasonable to us, it is appropriate. We are accountable to ourselves and not to other people, the environment, or society. Human history is filled with examples of people in politics, business, and religion who demonstrated this attitude and caused enormous suffering.

Unlike the titans who struggle with death, we, as gods, know that death is not really real. We take comfort in the thought that “death is an illusion.” The only people who die are those who are stuck and have not come to the true inner place beyond time, change, and death. We may even believe that we have the potential to develop our bodies and minds to such a degree that we can reverse the aging process and become one of the “immortals.”

A man, walking on a beach, reaches down and picks up a pebble. Looking at the small stone in his hand, he feels very powerful and thinks of how with one stroke he has taken control of the stone. “How many years have you been here, and now I place you in my hand.” The pebble speaks to him, “Though to you, I am only a grain of sand in your hand, you, to me, are but a passing breeze.”

Avoid Runaway Signaling in Effective Altruism

 

Above: “Virtue Signaling” by Geoffrey Miller. This presentation was given at EAGlobal 2016 at the Berkeley campus.

For a good introduction to the EA movement, we suggest this amazing essay written by Scott Alexander from SlateStarCodex, which talks about his experience at EAGlobal 2017 in San Francisco (note: we were there too, and the essay briefly discusses our encounter with him).

We have previously discussed why valence research is so important to EA. In brief, we argue that in order to minimize suffering we need to actually unpack what it means for an experience to have low valence (i.e., to feel bad). Unfortunately, modern affective neuroscience does not have a full answer to this question, but we believe that the approach we use at the Qualia Research Institute has the potential to actually uncover the underlying equation for valence. We deeply support the EA cause and we think that it can only benefit from foundational consciousness research.

We’ve already covered some of the work by Geoffrey Miller (see this, this, and this). His sexual selection framework for understanding psychological traits is highly illuminating, and we believe that it will, ultimately, be a crucial piece of the puzzle of valence as well.

We think that in this video Geoffrey makes some key points about how society may perceive EAs, points that are very important to keep in mind as the movement grows. Here is a partial transcript of the video that we think anyone interested in EA should read (it covers 11:19-20:03):

So, I’m gonna run through the different traits that I think are the most relevant to EA issues. One is low intelligence versus high intelligence. This is a remarkably high intelligence crowd. And that’s good in lots of ways. Like you can analyze complex things better. A problem comes when you try to communicate findings to the people in the middle of the bell curve or even to the lower end. Those folks are the ones who are susceptible to buying books like “Homeopathic Care for Cats and Dogs” which is not evidence-based (your cat will die). Or giving to “Guide Dogs for the Blind”. And if you think “I’m going to explain my ethical system through Bayesian rationality” you might impress people, you might signal high IQ, but you might not convince them.

I think there is a particular danger of “runaway IQ-signaling” in EA. I’m relatively new to EA, I’m totally on board with what this community is doing, I think it’s awesome, it’s terrific… I’m very concerned that it doesn’t go the same path I’ve seen many other fields go, which is: when you have bright people, they start competing for status on the basis of brightness, rather than on the basis of actual contributions to the field.

[Slide: IQ]

So if you have elitist credentialism, like if your first question is “where did you go to school?”. Or “I take more Provigil than you, so I’m on a nootropics arms race”. Or you have exclusionary jargon that nobody can understand without Googling it. Or you’re skeptical about everything equally, because skepticism seems like a high IQ thing to do. Or you fetishize counter-intuitive arguments and results. These are problems. If your idea of a Trolley Problem involves twelve different tracks, then you’re probably IQ signaling.

[Slide: Runaway IQ signaling]

A key Big Five personality trait to worry about, or to think about consciously, is openness to experience. Low openness tends to be associated with drinking alcohol, voting Trump, giving to ineffective charities, standing for traditional family values, and being sexually inhibited. High openness to experience tends to be associated with, well, “I take psychedelics”, or “I’m libertarian”, or “I give to SCI”, or “I’m polyamorous”, or “casual sex is awesome”.

[Slide: Openness to experience]

Now, it’s weird that all these things come in a package (left), and that all these things come in a package (right), but that empirically seems to be the case.

[Slide: Openness to experience, continued]

Now, one issue here is that high openness is great- I’m highly open, and most of you guys are too- but what we don’t want to do is try to sell people the whole package and say “you can’t be EA unless you are politically liberal”, or “unless you are a Globalist”, or “unless you support unlimited immigration”, or “unless you support BDSM”, or “transhumanism”, or whatever… right, you can get into runaway openness signaling like the Social Justice Warriors do, and that can be quite counter-productive in terms of how your field operates and how it appears to others. If you are using rhetoric that just reactively disses all of these things [low openness attributes], be aware that you will alienate a lot of people with low openness. And you will alienate a lot of conservative business folks who have a lot of money who could be helpful.

Another trait is agreeableness. Kind of… kindness, and empathy, and sympathy. So low agreeableness- and this is the trait with the biggest sex difference on average, men are lower on agreeableness than women. Why? Because we did a bit more hunting, and stabbing each other, and eating meat. And high A tends to be more “cuddle parties”, and “voting for Clinton”, and “eating Tofu”, and “affirmative consent rather than Fifty Shades”. 

[Slide: Agreeableness]

EA is a little bit weird because this community, from my observations, combines certain elements of high agreeableness- obviously, you guys care passionately about sentient welfare across enormous spans of time and space. But it also tends to come across, potentially, as low agreeableness, and that could be a problem. If you analyze ethical and welfare problems using just cold rationality, or you emphasize rationality- because you are mostly IQ signaling- it comes across to everyone outside EA as low agreeableness. As borderline sociopathic. Because traditional ethics and morality, and charity, is about warm heartedness, not about actually analyzing problems. So just be aware: this is a key personality trait that we have to be really careful about how we signal it. 

[Slide: Agreeableness, continued]

High agreeableness tends to be things like traditional charity, where you have a deontological perspective, sacred moral rules, sentimental anecdotes, “we’re helping people with this well in Africa that spins around, children push on it, awesome… whatever”. You focus on vulnerable cuteness, like charismatic megafauna if you are doing animal welfare. You focus on in-group loyalty, like “let’s help Americans before we help Africa”. That’s not very effective, but it’s highly compelling… emotionally… to most people, as a signal. And the stuff that EA tends to do, all of this: facing tough trade-offs, doing expected utility calculations, focusing on abstract sentience rather than cuteness… that can come across as quite cold-hearted.

[Slide: Agreeableness, continued]

EA so far, in my view- I haven’t run personality questionnaires on all of you, but my impression is- tends to attract a fairly narrow range of cognitive and personality types. Obviously high IQ, probably the upper 5% of the bell curve. Very high openness, I doubt there are many Trump supporters here. I don’t know. Probably not. [Audience member: “raise your hands”. Laughs. Someone raises hands]. Uh oh, a lynching on the Berkeley campus. And in a way there might be a little bit of low agreeableness, combined with abstract concern for sentient welfare. It takes a certain kind of lack of agreeableness to even think in complex rational ways about welfare. And of course there is a fairly high proportion of nerds and geeks- i.e. Asperger’s syndrome- me as much as anybody else out here, with a focus on what Simon Baron-Cohen calls “systematizing” over “empathizing”. So if you think systematically, and you like making lists, and doing rational expected value calculations, that tends to be a kind of Aspie way of approaching things. The result is, if you make systematizing arguments, you will come across as Aspie, and that can be good or bad depending on the social context. If you do a hard-headed, or cold-hearted, analysis of suffering, that also tends to signal so-called dark triad traits- narcissism, Machiavellianism, and sociopathy- and I know this is a problem socially, and sexually, for some EAs that I know! That they come across to others as narcissistic, Machiavellian, or sociopathic, even though they are actually doing more good in the world than the high agreeableness folks.

[Slide: The explanatory power of virtue signaling]

[Thus] I think virtue signaling helps explain why EA is prone to runaway signaling of intelligence and openness. So if you include a lot more math than you really strictly need to, or more intricate arguments, or more mind-bending counterfactuals, that might be more about signaling your own IQ than solving relevant problems. I think it can also explain, according to the last few slides, why EA concerns about tractability, globalism, and problem neglectedness can seem so weird, cold, and unappealing to many people.

[Slide: The explanatory power of virtue signaling, continued]