What is Love? Neural Annealing in the Presence of an Intentional Object

Excerpt from: The Neuroscience of Meditation: Four Models by Michael E. Johnson


Neural annealing: Annealing involves heating a metal above its recrystallization temperature, keeping it there long enough for the microstructure of the metal to reach equilibrium, then slowly cooling it down, letting new patterns crystallize. This releases the internal stresses of the material, and is often used to restore ductility (plasticity and toughness) in metals that have been ‘cold-worked’ and have become very hard and brittle. In a sense, annealing is a ‘reset switch’ which allows metals to return to a more pristine, natural state after being bent or stressed. I suspect this is a useful metaphor for brains: they too can become hard and brittle over time as internal stresses build up, and these stresses can be released by periodically entering high-energy states in which a more natural neural microstructure can reemerge.
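The metallurgical process described above has a standard computational analogue, simulated annealing, which captures the same intuition: running the system hot lets it escape configurations it has gotten stuck in, and slow cooling lets it settle into a lower-stress one. As a sketch only (the ‘stress landscape’ and cooling schedule below are invented for the example, not drawn from the text):

```python
import math
import random

def simulated_annealing(energy, x0, t_start=5.0, t_end=0.01, steps=5000):
    """Minimize `energy` by annealing: run hot at first (freely accepting
    worse states, i.e. 'melting' the configuration), then cool slowly so
    the system settles into a low-stress configuration."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for i in range(steps):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = x + random.gauss(0, 1)
        e_new = energy(candidate)
        # Always accept improvements; accept worse states with probability
        # exp(-delta/t), which shrinks as the system cools.
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            x, e = candidate, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# A rugged 'stress landscape' with many local minima; global minimum at x = 0.
def rugged(x):
    return x * x + 3 * math.sin(5 * x) ** 2

random.seed(0)  # for reproducibility
best_x, best_e = simulated_annealing(rugged, x0=8.0)
print(best_x, best_e)
```

The high-temperature phase is what allows the system to leave ‘cold-worked’ local minima; cooling too fast simply freezes the current configuration in place, which is the same tradeoff raised below about cooling the brain quickly vs. slowly.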

Furthermore, from what I gather from experienced meditators, successfully entering meditative flow may be one of the most reliable ways to reach these high-energy brain states. I.e., it’s very common for meditation to produce feelings of high intensity, at least in people able to actually enter meditative flow.* Meditation also produces more ‘pure’ or ‘neutral’ high-energy states, ones that are free of the intentional content usually associated with intense experiences which may distort or limit the scope of the annealing process. So we can think of intermediate-to-advanced (‘successful flow-state’) meditation as a reheating process, whereby the brain enters a more plastic and neutral state, releases pent-up structural stresses, and recrystallizes into a more balanced, neutral configuration as it cools. Iterated many times, this will drive an evolutionary process and will produce a very different brain, one which is more unified & anti-fragile, less distorted toward intentionality, and in general structurally optimized against stress.

An open question is how or why meditation produces high-energy brain states. There isn’t any consensus on this, but I’d offer, with a nod to the predictive coding framework, that bottom-up sense-data is generally excitatory, adding energy to the system, whereas top-down predictive Bayesian models are generally inhibitory, functioning as ‘energy sinks’. And so by ‘noting and knowing’ our sensations before our top-down models activate, we are in a sense diverting the ‘energy’ of our sensations away from its usual counterbalancing force. If we do this long enough and skillfully enough, this energy can build up and lead to ‘entropic disintegration’, the prerequisite for annealing. (Thanks to Andrés for discussion here.)
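Purely as a toy illustration of this proposed dynamic (the update rule and all parameters are invented for the example and carry no neuroscientific weight), one can model sense-data as a constant energy inflow and the top-down model as a proportional drain; weakening the drain, as ‘noting and knowing’ is hypothesized to do, makes energy accumulate:

```python
def free_energy_trace(steps=50, input_gain=1.0, inhibition=0.9, noting=False):
    """Toy accumulator: each step, bottom-up sense-data adds energy
    (excitatory), then the top-down predictive model drains a fraction of
    it (inhibitory 'energy sink'). 'Noting' a sensation before the
    top-down model fully activates is modeled as weakening that drain."""
    drain = inhibition * (0.2 if noting else 1.0)
    energy, trace = 0.0, []
    for _ in range(steps):
        energy += input_gain      # excitatory bottom-up input
        energy -= drain * energy  # inhibitory top-down 'energy sink'
        trace.append(energy)
    return trace

baseline = free_energy_trace()
meditating = free_energy_trace(noting=True)
print(round(baseline[-1], 2), round(meditating[-1], 2))  # prints: 0.11 4.56
```

With the drain intact the system settles at a low equilibrium; with the drain weakened, energy accumulates to a far higher level, the kind of build-up the text identifies as the prerequisite for ‘entropic disintegration’.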

If this model is accurate, it has important implications for optimizing a meditation practice. E.g., we should try to figure out some rules of thumb for:

  • How to identify a high-energy brain state, in yourself and others, and how best to create them;
  • Things to do, and things not to do, during an annealing process (‘how to anneal the right things’);
  • Identifying tradeoffs in ‘cooling’ the brain quickly vs slowly.

Off the top of my head, I’d suggest that one of the worst things you could do after entering a high-energy brain state would be to fill your environment with distractions (e.g., watching TV, inane small talk, or other ‘low-quality patterns’). Likewise, it seems crucial to avoid socially toxic or otherwise highly stressful conditions. Most likely, going to sleep as soon as possible without breaking flow would be a good strategy for getting the most out of a high-energy state. Avoiding strong negative emotions during such states seems important, as does managing your associations (psychedelics are another way to reach these high-energy states, and people have noticed there is an ‘imprinting’ process whereby the things you think about and feel while high can leave durable imprints on how you feel after the trip). Finally, perhaps taking certain nootropics could help strengthen (or weaken) the magnitude of this annealing process.

Finally, to speculate a little about one of the deep mysteries of life, perhaps we can describe love as the result of a strong annealing process while under the influence of some pattern. I.e., evolution has primed us such that certain intentional objects (e.g. romantic partners) can trigger high-energy states in which the brain smooths out its discontinuities/dissonances, such that given the presence of that pattern our brains are in harmony.[3] This is obviously a two-edged sword: on one hand it heals and renews our ‘cold-worked’ brain circuits and unifies our minds, but on the other it makes us dependent: the felt-sense of this intentional object becomes the key which unlocks this state. (I believe we can also anneal to archetypes instead of specific people.)

Annealing can produce durable patterns, but isn’t permanent; over time, discontinuities creep back in as the system gets ‘cold-worked’. To stay in love over the long-term, a couple will need to re-anneal in the felt-presence of each other on a regular basis.[4] From my experience, some people have a natural psychological drive toward reflexive stability here: they see their partner as the source of goodness in their lives, so naturally they work hard to keep their mind aligned on valuing them. (It’s circular, but it works.) Whereas others are more self-reliant, exploratory, and restless, less prone toward these self-stable loops or annealing around external intentional objects in general. Whether or not, and within which precise contexts, someone’s annealing habits fall into this ‘reflexive stability attractor’ might explain much about e.g. attachment style, hedonic strategy, and aesthetic trajectory.

Links: Annealing (metallurgy) · The entropic brain

[3] Anecdotally, the phenomenology of love-annealing is that the object ‘feels beautiful from all angles’. This may imply that things (ideas, patterns, people) which are more internally coherent & invariant across contexts can produce stronger annealing effects — i.e. these things are easier to fall deeply in love with given the same ‘annealing budget’, and this love is more durable.

[4] It’s important to note that both intense positive and intense negative experiences can push the brain into high-energy states; repeated annealing to negative emotions may serve many of the same functions as ‘positive annealing’, but also predispose brains to ‘sing in a minor key’ (see ‘kindling’).


Related Work: Algorithmic Reduction of Psychedelic States, Principia Qualia: Part II – Valence, and Ecstasy and Honesty


Image credit: Fabián Jiménez

Qualia Computing Media Appearances

Podcasts

The Future of Mind (Waking Cosmos, October 2018)

Consciousness, Qualia, and Psychedelics with Andres Gomez Emilsson (Catalyzing Coherence, May 2018)

Consciousness and Qualia Realism (Cosmic Tortoise, May 2018)

Robert Stark interviews Transhumanist Andres Gomez Emilsson (The Stark Truth with Robert Stark, October 2017)

Como el MDMA, pero sin la neurotoxicidad (Abolir el sufrimiento con Andrés Gómez) (Guía Escéptica [in Spanish], March 2016)

Happiness is Solving the World’s Problems (The World Transformed, January 2016)

Presentations

Quantifying Valence (see also The Science of Consciousness, April 2018)

Quantifying Bliss (Consciousness Hacking, June 2017)

Utilitarian Temperament: Satisfying Impactful Careers (BIL Oakland 2016: The Recession Generation, July 2016)

Interviews

Want a Penfield Mood Organ? This Scientist Might Be Able to Help (Ziff Davis PCMag, April 2018)

Frameworks for Consciousness – Andres Gomez Emilsson (Science, Technology & the Future by Adam Ford, March 2018)

Towards the Abolition of Suffering through Science (featuring David Pearce, Brian Tomasik, & Mike Johnson hosted by Adam Ford, August 2015)

The Mind of David Pearce (Stanford, December 2012)

Andrés Gómez Emilsson, el joven que grito espurio a Felipe Calderón (Cine Desbundo [in Spanish], October 2008)

Narrative Inclusions

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson (Science, Technology, Future, October 2018)

Podcast with Daniel Ingram (Cosmic Tortoise [referenced at 2h22m], January 2018)

Fear and Loathing at Effective Altruism Global 2017 (Slate Star Codex, August 2017)

Transhumanist Proves Schrödinger’s Cat Experiment Isn’t Better on LSD (Inverse, October 2016)

High Performer: Die Renaissance des LSD im Silicon Valley (Wired Germany [in German], June 2015)

Come With Us If You Want To Live (Harper’s Magazine, January 2015)

David Pearce’s Social Media Posts (Hedweb: pre-2014, 2014, 2015, 2016, 2017, 2018)

David Pearce at Stanford 2011 (Stanford Transhumanist Association, December 2011)

External Articles

Ending Suffering Is The Most Important Cause (IEET, September 2015)

This Is What I Mean When I Say ‘Consciousness’ (IEET, September 2015)

My Interest Shifted from Mathematics to Consciousness after a THC Experience (IEET, September 2015)

‘Spiritual/Philosophical’ is the Deepest, Highest, Most Powerful Dimension of Euphoria (IEET, September 2015)

Bios

H+pedia, ISI-S, The Transhuman Party, Decentralized AI Summit, Earth Sharing

Miscellaneous

Philosophy of Mind Stand-up Comedy (The Science of Consciousness, April 2018)

Randal Koene vs. Andres Emilsson on The Binding Problem (Bay Area Futurists, Oakland CA, May 2016)


Note: I am generally outgoing, fun-loving, and happy to participate in podcasts, events, interviews, and miscellaneous activities. Feel free to invite me to your podcast/interview/theater/etc. I am flexible when it comes to content; anything I’ve written about in Qualia Computing is fair game for discussion. Infinite bliss!

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

– Alan Watts, talking about Camus’s claim that suicide is the most important philosophical question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (a.k.a. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together as a unit in order to hang together tightly. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. Here is a brief description of each: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain *in its own terms* why other worldviews are popular. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people. That evil beings exist objectively, and that it is righteous to punish them and see them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck here. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, where there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumptions of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above) we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived as either sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure, all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

Balance Between Good and Evil by tvtropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because: people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication; they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws); or they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and project it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, binds them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory-farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and by not thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for such a reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, where they recognize that there are states of consciousness that are inherently better in some intrinsic subjective value sense than others.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics for fear of sounding unwise or insensitive. And when push comes to shove, people at this level lack good arguments to decisively rule out why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures push in the opposite direction: negative experiences turn out to be the only states of consciousness that are evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and relative to the general population they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is to formalize the situation and work out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts suggesting it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used to steer evolution in a positive direction. In particular, as of 2017:

  1. A very large proportion of the economy is dedicated to trading positive experiences for money, rather than just survival or tools of power. Thus an economy of information about states of consciousness is still feasible.
  2. There is a large fraction of the population that is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and ultimately, economically useful (see hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested in (or at least curious about) the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence but also, amazingly, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies in which the highest value is the collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider for example the contrast between Burning Man and Las Vegas. Burning Man is a place that works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place that is filled to the top with pure replicators in the form of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that will make it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. By definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. And in the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they may require. Such utility will be a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Image: system specifications]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence. We have the inputs, the outputs, the constraints (such as temperature, materials, etc.) and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

[Image: basic system]

Now, we introduce the notion of the “valence for the system given a particular input” (i.e. the valence for the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

[Image: valence and altruism]

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the utility of each input multiplied by its probability.

[Image: input probabilities]

(Note: a more complete formalization would also weigh the importance of each input-output transformation, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system given the distribution of inputs it is used for, its valence, its efficiency metrics, and its constraints as E[U(s|v, e, c, P(I))]:

[Image: chosen system]

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
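The formalism above can be illustrated with a toy sketch. All names here (System, utility, choose_system) and all numbers are illustrative assumptions, not an established API; the constraints c are held fixed and folded into the efficiency values, and we assume (per the valence structuralism premise) that v(s|i) is quantifiable.

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    efficiency: dict  # e(o|i, s, c): efficiency of producing the output for input i
    valence: dict     # v(s|i): valence of the system's state given input i

def utility(eff, val, altruism_weight):
    """U(e, v): a pure replicator has altruism_weight = 0 and ignores valence;
    an altruistic intelligence rewards positive valence and penalizes negative."""
    return eff + altruism_weight * val

def expected_utility(system, input_probs, altruism_weight=0.0):
    """E[U(s|v, e, c, P(I))]: sum over inputs of P(i) * U(e(o|i,s,c), v(s|i))."""
    return sum(p * utility(system.efficiency[i], system.valence[i], altruism_weight)
               for i, p in input_probs.items())

def choose_system(systems, input_probs, altruism_weight=0.0):
    """The intelligence picks the system that maximizes expected utility."""
    return max(systems, key=lambda s: expected_utility(s, input_probs, altruism_weight))

# Two hypothetical systems implementing the same input-output mapping:
fast_but_suffering = System("fast", efficiency={"i1": 10.0}, valence={"i1": -5.0})
slow_but_happy = System("slow", efficiency={"i1": 6.0}, valence={"i1": 3.0})
inputs = {"i1": 1.0}  # P(I): a single input occurring with probability 1
```

With these toy numbers, a pure replicator (altruism_weight = 0) selects the efficient-but-suffering system, while an altruist (altruism_weight = 1) selects the happier, less efficient one, which is exactly the divergence the next paragraphs discuss.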

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on their altruism. If altruism does not reduce such reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). As evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of such systems. In other words, evolution pushes intelligences to become pure replicators in the long run.

Hence we should ask: How can altruism increase the chances of reproduction? A possibility would be for the environment to reward entities that are altruistic. Unfortunately, in the long run we might see that environments that reward altruistic entities produce less efficient entities than environments that don’t. If there are two very similar environments, one which rewards altruism and one which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism is implementing the reward for altruism in the former. Thus, we suggest finding environments in which rewarding altruism is baked into their very nature, such that similar environments without such reward either don’t exist or are too unstable to exist for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.
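The selection argument above can be sketched as a toy simulation. Every parameter here is an illustrative assumption: altruists pay an efficiency cost for refusing the most efficient (negative-valence) systems, so unless the environment itself rewards altruism, resampling dynamics drive them out.

```python
import random

def step(population, altruism_bonus=0.0):
    """One generation of Wright-Fisher style resampling. Non-altruists use the
    most efficient system (fitness 1.0); altruists pay an efficiency cost
    (fitness 0.8) but may receive an environmental reward for altruism."""
    def fitness(is_altruist):
        return 0.8 + altruism_bonus if is_altruist else 1.0
    weights = [fitness(a) for a in population]
    return random.choices(population, weights=weights, k=len(population))

def altruist_fraction(population):
    return sum(population) / len(population)

random.seed(0)

# Environment with no reward for altruism: altruists are selected against.
unrewarded = [True] * 50 + [False] * 50
for _ in range(200):
    unrewarded = step(unrewarded)

# Environment where the reward for altruism outweighs its efficiency cost.
rewarded = [True] * 50 + [False] * 50
for _ in range(200):
    rewarded = step(rewarded, altruism_bonus=0.4)
```

After 200 generations the altruists are typically gone from the first environment and dominant in the second; the point of the paragraph above is that the second kind of environment must have its reward baked in, or the first kind will eventually overrun it.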

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing such valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in defining the rough outline of kinds of theories of consciousness. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How we can formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned about pain or pleasure, and there is cause to believe that behaviorism itself might be an anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are in fact hedonically charged, and for what reason, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind gets more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses being made about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state is. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than this working as a reductio ad absurdum of functionalism, many of its proponents have instead turned around to conclude that consciousness itself is not a natural kind. This does represent an important challenge for defining the valence of the system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.

We also should note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account of consciousness refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purpose of this article should be good enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, both in relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need to obtain this frame-invariant description of what the system is in a given state. Thus, we need a function that takes as input physical measurements of the system and returns the best possible approximation to what is actually going on under the hood, Ph(s). And only with this function Ph(s) would we be ready to compute the valence of the system. Now, in practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well-approximated by its high-level features.

The main problem with Non-Materialist Physicalism comes when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all of the relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem to be a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of such stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal, aspects of our existence happen to be computationally advantageous. Just as using quantum computing for factoring large numbers, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, so might using bound conscious experiences help outcompete non-sentient animals.

Of course, there is yet no evidence of macroscopic decoherence and the brain is too hot anyway, so on the face of it Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions to falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that makes both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters required by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). So, at one extreme, it may be that a fully optimized brain simply does not need consciousness to solve problems. And at the other extreme, it may turn out that consciousness is extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out the way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all future entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Hence the case for pushing our evolutionary history in the right direction right now, while we still can.

Coming Next: The Hierarchy of Cooperators

Now that we covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality in its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, an availability of states of consciousness, and a set of beliefs and practices to help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms that are producing the output are probably highly complex and using information processing with human-like linguistic representations, which means that, by all means, the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think these are among the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

Avoid Runaway Signaling in Effective Altruism

 

Above: “Virtue Signaling” by Geoffrey Miller. This presentation was given at EAGlobal 2016 at the Berkeley campus.

For a good introduction to the EA movement, we suggest this amazing essay written by Scott Alexander from SlateStarCodex, which talks about his experience at EAGlobal 2017 in San Francisco (note: we were there too, and the essay briefly discusses our encounter with him).

We have previously discussed why valence research is so important to EA. In brief, we argue that in order to minimize suffering we need to actually unpack what it means for an experience to have low valence (i.e. to feel bad). Unfortunately, modern affective neuroscience does not have a full answer to this question, but we believe that the approach we use at the Qualia Research Institute has the potential to actually uncover the underlying equation for valence. We deeply support the EA cause and we think that it can only benefit from foundational consciousness research.

We’ve already covered some of the work by Geoffrey Miller (see this, this, and this). His sexual selection framework for understanding psychological traits is highly illuminating, and we believe that it will, ultimately, be a crucial piece of the puzzle of valence as well.

We think that in this video Geoffrey is making some key points about how society may perceive EAs which are very important to keep in mind as the movement grows. Here is a partial transcript of the video that we think anyone interested in EA should read (it covers 11:19-20:03):

So, I’m gonna run through the different traits that I think are the most relevant to EA issues. One is low intelligence versus high intelligence. This is a remarkably high intelligence crowd. And that’s good in lots of ways. Like you can analyze complex things better. A problem comes when you try to communicate findings to the people in the middle of the bell curve or even to the lower end. Those folks are the ones who are susceptible to buying books like “Homeopathic Care for Cats and Dogs” which is not evidence-based (your cat will die). Or giving to “Guide Dogs for the Blind”. And if you think “I’m going to explain my ethical system through Bayesian rationality” you might impress people, you might signal high IQ, but you might not convince them.

I think there is a particular danger of “runaway IQ-signaling” in EA. I’m relatively new to EA, I’m totally on board with what this community is doing, I think it’s awesome, it’s terrific… I’m very concerned that it doesn’t go the same path I’ve seen many other fields go, which is: when you have bright people, they start competing for status on the basis of brightness, rather than on the basis of actual contributions to the field.

[Slide: IQ]

So if you have elitist credentialism, like if your first question is “where did you go to school?”. Or “I take more Provigil than you, so I’m on a nootropics arms race”. Or you have exclusionary jargon that nobody can understand without Googling it. Or you’re skeptical about everything equally, because skepticism seems like a high IQ thing to do. Or you fetishize counter-intuitive arguments and results. These are problems. If your idea of a Trolley Problem involves twelve different tracks, then you’re probably IQ signaling.

[Slide: runaway IQ signaling]

A key Big Five personality trait to worry about, or to think about consciously, is openness to experience. Low openness tends to be associated with drinking alcohol, voting Trump, giving to ineffective charities, standing for traditional family values, and being sexually inhibited. High openness to experience tends to be associated with, well, “I take psychedelics”, or “I’m libertarian”, or “I give to SCI”, or “I’m polyamorous”, or “casual sex is awesome”.

[Slide: openness]

Now, it’s weird that all these things come in a package (left), and that all these things come in a package (right), but that empirically seems to be the case.

[Slide: openness 2]

Now, one issue here is that high openness is great- I’m highly open, and most of you guys are too- but what we don’t want to do is try to sell people the whole package and say “you can’t be EA unless you are politically liberal”, or “unless you are a Globalist”, or “unless you support unlimited immigration”, or “unless you support BDSM”, or “transhumanism”, or whatever… right, you can get into runaway openness signaling like the Social Justice Warriors do, and that can be quite counter-productive in terms of how your field operates and how it appears to others. If you are using rhetoric that just reactively disses all of these things [low openness attributes], be aware that you will alienate a lot of people with low openness. And you will alienate a lot of conservative business folks who have a lot of money who could be helpful.

Another trait is agreeableness. Kind of… kindness, and empathy, and sympathy. So low agreeableness- and this is the trait with the biggest sex difference on average, men are lower on agreeableness than women. Why? Because we did a bit more hunting, and stabbing each other, and eating meat. And high A tends to be more “cuddle parties”, and “voting for Clinton”, and “eating Tofu”, and “affirmative consent rather than Fifty Shades”. 

[Slide: agreeableness]

EA is a little bit weird because this community, from my observations, combines certain elements of high agreeableness- obviously, you guys care passionately about sentient welfare across enormous spans of time and space. But it also tends to come across, potentially, as low agreeableness, and that could be a problem. If you analyze ethical and welfare problems using just cold rationality, or you emphasize rationality- because you are mostly IQ signaling- it comes across to everyone outside EA as low agreeableness. As borderline sociopathic. Because traditional ethics and morality, and charity, is about warm heartedness, not about actually analyzing problems. So just be aware: this is a key personality trait that we have to be really careful about how we signal it. 

[Slide: agreeableness 3]

High agreeableness tends to be things like traditional charity, where you have a deontological perspective, sacred moral rules, sentimental anecdotes, “we’re helping people with this well in Africa that spins around, children push on it, awesome… whatever”. You focus on vulnerable cuteness, like charismatic megafauna if you are doing animal welfare. You focus on in-group loyalty, like “let’s help Americans before we help Africa”. That’s not very effective, but it’s highly compelling… emotionally… to most people, as a signal. And the stuff that EA tends to do, all of this: facing tough trade-offs, doing expected utility calculations, focusing on abstract sentience rather than cuteness… that can come across as quite cold-hearted.

[Slide: agreeableness 2]

EA so far, in my view- I haven’t run personality questionnaires on all of you, but my impression is- it tends to attract a fairly narrow range of cognitive and personality types. Obviously high IQ, probably the upper 5% of the bell curve. Very high openness, I doubt there are many Trump supporters here. I don’t know. Probably not. [Audience member: “raise your hands”. Laughs. Someone raises hands]. Uh oh, a lynching on the Berkeley campus. And in a way there might be a little bit of low agreeableness, combined with abstract concern for sentient welfare. It takes a certain kind of lack of agreeableness to even think in complex rational ways about welfare. And of course there is a fairly high proportion of nerds and geeks- i.e. Asperger’s syndrome- me as much as anybody else out here, with a focus on what Simon Baron-Cohen calls “systematizing” over “empathizing”. So if you think systematically, and you like making lists, and doing rational expected value calculations, that tends to be a kind of Aspie way of approaching things. The result is, if you make systematizing arguments, you will come across as Aspie, and that can be good or bad depending on the social context. If you do a hard-headed, or cold-hearted analysis of suffering, that also tends to signal so-called dark triad traits- narcissism, Machiavellianism, and sociopathy- and I know this is a problem socially, and sexually, for some EAs that I know! That they come across to others as narcissistic, Machiavellian, or sociopathic, even though they are actually doing more good in the world than the high agreeableness folks.

explanatory_power_of_virtue_signaling

[Thus] I think virtue signaling helps explain why EA is prone to runaway signaling of intelligence and openness. So if you include a lot more math than you really strictly need to, or more intricate arguments, or more mind-bending counterfactuals, that might be more about signaling your own IQ than solving relevant problems. I think it can also explain, according to the last few slides, why EA concerns about tractability, globalism, and problem neglectedness can seem so weird, cold, and unappealing to many people.

explanatory_power_of_virtue_signaling_1

 

Burning Man

[Content Warning: Deals with heavy topics including gruesome deaths, fear of the multiverse, bad trips, possible meme hazards, and psychotic delusions. Epistemic Status: Confident in about half of the content; the rest is extremely speculative. Everything in this text is subject to heavy revision upon learning more information. I wrote this in haste right after Burning Man, before my state-specific memory access went away. Please take this writeup with a giant grain of salt]

Burning Man

This is the first year that I attended Burning Man. I do not claim to be a Burning Man expert. I’m just a consciousness researcher who happened to attend the Burn and found the experience amazing and insightful. So much so that writing 13,500+ words about it seemed appropriate. Here goes nothing.

Introduction

I arrived on the morning of the first day (Sunday the 27th of August) and left on Monday (4th of September). I intellectually know that I only spent eight full nights and seven full days at the Playa, but my visceral feeling of time refuses to acknowledge this fact. Like a heavy acid trip, Burning Man expands time beyond recognition. The experience maxes out one’s novelty detection mechanisms (latent inhibition be damned) and leads one to conclude that a lifetime has happened. Before my brain readjusts to consensus reality, here are my candid impressions of the event and the insights that came together during it. As it turns out, I think that Burning Man is a profoundly significant event with far-reaching implications. While from afar it is easy to dismiss it as a mere techie-filled, psychedelic-fueled hedonistic festival, the truth is that Burning Man may be one of the few key outlets in the world for the exploration of potential futures that are truly worth living, i.e. post-Darwinian societies. More on this later.

Strong Emergence

It is notoriously hard to boil down the experience into just a few take-aways (example). Burning Man does not lend itself to dimensionality reduction; merely talking about the mental forces that make up the memetic constituents of the population of Black Rock City (predominantly: artists, spiritual practitioners, scientists, environmentalists, techies, philosophers, and qualia lovers) would be akin to describing a biological plant merely in terms of the atomic elements found within it. It’s true that if you grind it down to a fine powder, vaporize it (to break down its proteins and molecules), and then analyze such vapor with X-ray spectroscopy you will characterize the percentage of carbon, nitrogen, potassium, etc. atoms in it. And while this is a necessary part of a full description of such a plant, the elemental breakdown of its composition just scratches the surface of what the plant truly is. This is analogous to the Burn, for Burning Man’s most interesting aspects, like those of a living organism, are to be found at high levels of emergence. In the case of biological organisms we are talking about the large scale assemblies of biomolecules (themselves already complex) implementing elaborate interdependent metabolic functions working together to bring about finely tuned adaptive behavior. Oftentimes, biological organisms utilize the properties of basement reality (i.e. quantum fields) to implement functions that would have formerly been described as strongly emergent (i.e. as metaphysically supervening properties bigger than the mere sum of their parts), as is currently studied by the budding field of quantum biology. 
At Burning Man something akin to this may be going on as well: you find that people, emotions, and memes come together to create pods, camps, and happenings that are best described as energetic contingents of collective states of consciousness, all of which turn out to have mind-boggling emergent properties unavailable without the high levels of trust, openness, creativity, and coherence beneath the surface. Thus the futility of describing it in terms of what goes into it. Better to address the resulting (emergent) phenomena. More on this later.

The People

According to the 2016 Burning Man Census, the number one source of wonderful memories at Burning Man that Burners selected was the people. I personally found this to be very much the case. Although from afar one may think that BM attendees are largely psychedelic junkies, misguided hippies, and sentimental environmentalists, the truth is that the people in the Playa are extraordinary in multiple ways. It almost feels as if the art, the music, the workshops, and the principles are not the core attraction. Rather, these elements are merely an excuse to bring together amazing people who have a high probability of having deeply meaningful interactions and developing symbiotic relationships with each other for the betterment of humanity.

it_s_the_people

It’s about the people! (source)

Burners are highly educated. According to the Educational Attainment in the United States Wikipedia article, 36% of Americans between 25 and 34 years old have a bachelor’s degree or above (32% for those between 45 and 64, and 27% for those 65 and above), compared to 74.5% of the 2016 Burning Man attendees (of all ages). Additionally, 31.3% of them had a graduate degree, which is an insanely high figure when compared to the national base rate (11% for Americans above the age of 25). Moreover, this number has been steadily growing over the last few years. In other words, for what seems like an arts and crafts festival, this was an exceptionally well-educated crowd. And yet, education only scratches the surface of what makes these people interesting.

education

The Educational Attainment of Burners

I have attended academic conferences, rationalist meetups, meditation gatherings, psychedelic festivals, and even amazing events like Psychedelic Science, Effective Altruism Global, and The Science of Consciousness. The people I meet at these events often impress me in many ways, and talking to them has reinforced my conviction that humanity is indeed capable of bringing about a marvelous world free from unnecessary suffering. In light of these previous experiences I certainly did not anticipate being surprised by the people at Burning Man. I was wrong. While it’s true that not everyone at Burning Man is exceptional (“we are all unique, but not everyone is uniquely unique”), the base rate of people who deeply impressed me was possibly higher than at any other gathering I’ve ever been to. The consistent feeling I got was one of people who actually cared.

Here is a little project I’d love to see carried out: someone should take the time to conduct a cluster analysis of the people attending Burning Man using features such as their beliefs about reality, their lifestyle, their preferred social circles, etc. Simply based on my experience, I’d say that the main clusters featured would be: Spiritually serious people with thousands of hours of practice under their belt (50% of Burners describe themselves as “spiritual but not religious”), career ecologists who are looking for ways to live without leaving a footprint on the planet (“leave no trace”), social workers, programmers & rationalists, high grade hedonists, psychologists, and philosophical seekers.
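As a toy illustration of what the proposed cluster analysis could look like, here is a minimal k-means sketch in Python. Everything in it is hypothetical: the features, sample size, and cluster count are made up for demonstration and are not drawn from any real census data.

```python
# Hypothetical sketch of the proposed cluster analysis of Burners.
# All feature names, data, and the cluster count are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy survey: each row is one attendee; columns are imagined
# self-reported scores (e.g. spiritual practice, environmental
# concern, rationalist affinity, hedonic focus) on a 0-10 scale.
X = rng.uniform(0, 10, size=(500, 4))

def kmeans(data, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and
    centroid updates until the iteration budget runs out."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct random data points.
    centroids = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # Distance from every point to every centroid.
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# Seven clusters, matching the seven groups guessed at above.
labels, centroids = kmeans(X, k=7)
print(np.bincount(labels, minlength=7))  # cluster sizes
```

With real survey data, one would of course pick the number of clusters from the data (e.g. via silhouette scores) rather than assuming seven, and encode categorical answers (beliefs, social circles) numerically before clustering.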

I find that one of the most powerful aspects of Burning Man was that its participants were mostly open, ready, and willing to have their minds changed. Sure, we are all attached to our preexisting views about reality, and it’s always painful to let go of them. But the vibe of the place, perhaps through a combination of personality types, empathogenic and psychedelic drugs, and free-floating love, made it seem ok to let one’s deeply held beliefs cross-pollinate with those of others. Whether this was because of the high degree of openness to experience, relatively high conscientiousness (merely packing for the whole trip selects out people who can’t be bothered), typically high intelligence, or solid pro-sociality (disagreeable people are unlikely to get a kick out of the concept of a gifting economy), it doesn’t matter. The people I talked to were not engaging with ideas in a superficial way. They deeply engaged with them. They looked you in the eye, told you their deepest worries about reality, and expressed their beliefs with the underlying feeling of “we are together in this mess, so let’s work together to bootstrap our way out of it.”

Ok, I may be exaggerating a little here. Perhaps Burning Man is somewhat like Silicon Valley: it works more as a mirror of who you are than a solid thing that everyone will perceive in the same way. If you are a low-grade hedonist just looking to get drunk and make fun of others for taking Burning Man seriously you will naturally gravitate towards the camps where that’s the whole point, and if you are an income-focused techie merely looking to have a relaxing little vacation you will easily find yourself doing exactly that. But the point still stands that if you are a serious seeker looking for radically new ways of conceiving the nature of reality for the betterment of universal consciousness… there will be plenty of outlets, people, memes, artworks, and workshops for you to do exactly that at Burning Man. And oh man, are these things of high quality!

One of the wonderful people I met at the Burn was Bruce Damer, with whom I had the pleasure to talk about physics, computing, the origin of life, consciousness, and psychedelics. He shared with me an interesting way of looking at life that involves a tripartite feedback loop: life utilizes a “probability enhancing engine” (such as the interior of a cell boundary, where the probability of chemical reactions increases dramatically), a place to accumulate such changes as they happen (in which the reactions can be sustained), and a memory system (such as DNA, in which information about the self-replicating reactions can be stored and repurposed). Burning Man, in light of this model, is perhaps one of the leading sources of genuine memetic novelty in the world. With its very high density of people who are deliberate about their choices in life, BM works as a probability enhancing engine which drastically increases the chances for people to find others who are at their own level and are ready to collaborate at the same degree of commitment. The collective interpersonal temperature increases the probability of great matches being found, and the high (socially derived) hedonic tone fosters non-attachment to the attempts that don’t work out. On any given night enough people trip or take an empathogen that there is a general (real or imagined) contact-high state akin to a blend of empathogenesis and entheogenesis, i.e. ego-softening and ego-dissolving vibes, respectively. The result is a higher probability that pairs who would maximally benefit from each other actually meet and go on to collaborate on future projects. At least this describes my experience. (Be on the lookout for new collaborative projects between Qualia Computing and major institutions in the near future – this is just a teaser for now).

A handful of people I’d never met recognized me at the Playa. Apparently the Psychedelic Cryptography article reached enough people to make Qualia Computing and the Qualia Research Institute not the schizophrenic word salad they may sound like at first, but a player in the emerging memetic ecosystem at the foothills of the psychedelic renaissance. For example, on the night of the Burn I was hanging out next to a cucumber water stand on the Esplanade and a guy approached me and asked: “This is going to sound strange, but, are you by any chance Andrés? From Qualia Computing?” I answered “yes”, and then we proceeded to talk about DiPT, the blockchain, meditation-based cryptocurrency, Greg Egan, how John C. Lilly didn’t go far enough, and the Hedonistic Imperative. This was not by any means an unusual type of interaction in this context, and especially not at 3:30 in the morning (when the probability of magical encounters is highest).

enjoyment

All of this goes to show that Burning Man is full of people capable of engaging with very high-level ideas in a meaningful way. To be perfectly honest with you, I must confess that my model of the world is that only about 1% of people have any philosophical agency whatsoever. I do not resent this fact, because with the proper qualia they could turn themselves around right away. People experience philosophy through the eyes of learned helplessness. But at Burning Man (this year; my guess is every year) the percentage of people with philosophical agency might have been as high as 10-15%, which is about as high as I have found it to be at places like EAGlobal and in the rationalist community. I.e. a pretty freaking extraordinary ratio. Likewise, scientific, introspective, and spiritual literacy seemed to be through the roof. And even those who were not philosophically literate to begin with seemed extremely pleased to learn about qualia. I lost count of the number of people who were thrilled (THRILLED, I tell you) to learn that the word qualia existed and that it referred to the ineffable subjective character of sensations, like the blueness of blue. “You mean that there is a word for that?! Wow! I’m so happy now! Cheers to that!” was a rather typical reaction in this context. This warmed my heart. I love turning people on to the concept of qualia.

It is also worth pointing out that a pervasive underlying vibe at the Burn was that of a high trust society. Research shows that societies in which people believe that others around them have only the best intentions tend to enjoy a host of positive outcomes. The social dynamics at Burning Man run on high trust, and one can feel this in the air (along with a bunch of dust). Not only do the attendees seem to think of humans very highly (relative to the average person), but they also tend to see other Burners in an even better light: “To What Extent Do You Assume that People Have Only the Best Intentions?” (2016):

high_trust_society

Black Rock City as a very High Trust Society

Metaphysics

Before I go on with further object-level analysis of the Burn, let me pause for a second and make an overall point concerning the metaphysical nature of the universe: Metaphysics matter. Look, if Buddhist metaphysics are roughly correct (e.g. emptiness, karma, the reality of suffering, absence of omnipotent gods, reincarnation, etc.) then engaging in profoundly disturbing practices full of negative side effects such as Vipassanā might be very much worth the trouble. Sure, in this lifetime you will be exposed to deeply unsettling experiences, a multi-year long dark night of the soul, serious psychosomatic pain, meditation-induced depersonalization, insomnia, ADHD, etc. but in the grand scheme of things your current pain will be worth it. This lifetime’s suffering would be a good price to pay to attain Bodhisattva status and then go on to help quintillions of beings throughout your endless reincarnations to come. On the other hand, if karma is simply what it feels like to have an evolved in-built system to keep track of your social standing and nothing carries over after death, then Vipassanā might simply involve too much suffering to be worth it. In fact, it might even be an outright stupid and unethical activity, and talking about it in a way that produces curiosity and fear of missing out in others is doing them a disservice (for it would be a memetic hazard). You would be much better off focusing instead on cost-effective high-tech Jainism, valence technologies, and the upcoming reproductive revolution.

The same goes for other metaphysical topics such as the philosophy of personal identity, the fundamental nature of bliss, the mind-body problem, causality, the existence of alternate branches of the multiverse, the badness of suffering, etc. What the nature of reality turns out to be profoundly influences what it means to be a good person and what it is that we ought to do to maximize goodness and minimize suffering. Not many people seem to get this, though. For too many individuals, the trauma they experienced as a result of early-life exposure to manipulative religious memes, together with the intuitively-felt futility of philosophy, leads to the calcification of their philosophical background assumptions (which are rarely recognized as such). But as David Pearce says: “The penalty of not doing philosophy isn’t to transcend it, but simply to give bad philosophical arguments a free pass.”

Now, talking about metaphysics and David Pearce: for a wide variety of reasons I assign the bulk of my probability mass to his metaphysics (note: I also share his ethical views). I am not going to try to justify why I think he is probably right at the moment, for it would take many thousands of words*. For now it will suffice to say that I find David’s views to be the most informed, coherent, well thought out, and explanatory of all of the interpretations of reality I’ve ever been acquainted with. In rough form, here are the highlights of such a view (taken from here):

(0) Zero Ontology: The universe exists as a side effect of the total and complete absence of information.

(1) Events of conscious experience are ontologically unitary: The left and right side of your visual field are part of an integrated whole that stands as a natural unit.

(2) Physicalism: Physics is causally closed and it fully describes the behavior of the observable universe.

(3) Wavefunction realism: The decoherence program is the most parsimonious, scientific, and promising approach for interpreting quantum mechanics.

(4) Mereological Nihilism (also called Compositional Nihilism): Simply putting two objects A and B side by side will not make a new object “AB” appear ex nihilo.

(5) Qualia Realism: The various textures of qualia (phenomenal color, sounds, feelings of cold and heat, etc.) are not mere representations. On the contrary, our mind uses them to instantiate representations (this is an important difference).

(6) Causal efficacy: Consciousness is not standing idly by. It has definite causal effects in animals. In particular, there must be a causal pathway that allows us to discuss its existence.

(7) Qualia computing: The reason consciousness was recruited by natural selection is computational. In spite of its expensive caloric cost, consciousness improves the performance of fitness-relevant information processing tasks.

Together, all of these metaphysical points paint a coherent worldview that’s fully compatible with most (but not all) of the evidence at hand. Sadly, it’s also a very grim picture of reality: the multiverse is extremely large, eternal, interconnected, and full of suffering that will simply never go away. Worse, every moment of experience is permanently stuck in its own spatiotemporal coordinates (or rather, in whatever post-Everettian foliation-based generalization of relativistic coordinate systems the formalisms of physics admit). But if it’s true, we had better know about it, for there are serious ethical and policy implications to Pearcean metaphysics.

Most philosophies (and theodicies) may be thought of as exercises in motivated reasoning (“how can I think of reality in order to make sense of the facts while keeping it as meaningful as possible?”). Yet Pearce’s metaphysics is anything but. It’s sheer eternal terror dimly tamed by a glimmer of hope found in a handful of branches of the multiverse (where the Hedonistic Imperative is implemented, and the biology of suffering is effectively rooted out of a tiny subset of the existent forward light cones). Indeed, I can confidently say that the worst state of consciousness I’ve ever felt took place the first time my mind fully grasped Pearcean metaphysics and considered it to be the final answer. Thankfully I’ve learned to remain open-minded and agnostic about the ultimate nature of reality no matter how compelling a view may be; keeping a probability distribution over metaphysical views is perhaps a lot healthier (and more rational) than committing to any one of them as if it were true. Do not let your mind get crystallized; do not ever believe in your own bullshit, or you will have a self-induced bad trip. And yet, I do believe that it is my responsibility to act in accordance with what seems to be the most probable model of existence. If Pearce is right, I’d like to know that and be ok with it, act in accordance with it, and thus prevent as much suffering as is (post)humanly possible. Saints and Bodhisattvas are not supposed to engage in wishful thinking, and neither are 21st century effective altruists. Kudos to people like Brian Tomasik, who are not afraid to bite the bullet of their metaphysics and dedicate themselves fully to reducing suffering based on what they think is true. Do not ever bury your head in the sand. The stakes are too high. But also, beware of multiverse mania, which can severely paralyze people who settle on an Everettian picture of the universe, leading them to lose their capacity to be productive and helpful.

Now, what on earth does any of this have to do with Burning Man? A whole lot, I would argue. As I experienced it, Burning Man is an experiment in metaphysics. It’s an attempt to get awesome people from all walks of life to be open to each other’s life learnings and deep intuitions in order to transcend our current suffering-producing philosophical paradigms.

The Strong Tlön Hypothesis

Based on my conversations with people at the Playa, the most popular metaphysical interpretation of reality seemed to be what I call the Strong Tlön Hypothesis (STH for short). Skeptical scientific materialism was perhaps in second place, followed by generalized agnosticism (again, a wise choice given the psychological dangers of settling for a painful worldview). So what is this Strong Tlön Hypothesis? Tlön, Uqbar, Orbis Tertius is a wonderful short story by Jorge Luis Borges about strong idealism. This is the view that reality presents itself as a physical universe (consensus reality) merely as a consequence of a collective delusion. The belief state of us as a collective group mind (itself the manifested imagination of the one eternal being) is what sets the fundamental parameters of reality. In other words, the laws of physics guide the causal structure of reality simply because we believe in them. But if everyone chose to believe otherwise (perhaps not a simple feat to achieve), the nature of reality would in fact completely change. Suffering and separation, in this view, are the result of a tragedy of the commons, and not a brute fact about existence. Thus, by thinking about new metaphysical interpretations of reality, making sense of them, and giving them life with imagination and will, we would literally transform reality one thought at a time. Creation through imagination would be the underlying engine of reality; everything else is maya (metaphysical illusion).

On Sunday and Monday night I walked up to strangers and asked them “what do you think about consciousness?” The most common answer I received involved something akin to the Strong Tlön Hypothesis indeed, where Burners literally claimed that yes, if we all took psychedelics more seriously and decided to grow up spiritually all at once, we would all enter into a new stage in our cosmic evolution. Perhaps our current level of reality is what we need right now: A collective illusion created by us and God to allow us to deeply and fully grasp why this system fails. Until we internalize the problems with our current pursuits we will not be able to advance. We need to experience many lifetimes and have many experiences as a collective consciousness in this pseudo-Darwinian world in order to finally realize the problems with this system of belief. Only when we understand the intrinsic flaws of our current consensus reality will we be ready to move on to the next stage. Till then, it’s an uphill battle of waking up at a personal level and then deciding to help convince those around us that we have the power to change reality (and we need a threshold number of people to go along with this belief to have the capacity to structurally alter the bedrock of reality). Every life-form contains the universal Logos within. The God Force, so to speak, is within us all, gradually refining the structure of our mind to make us more and more God-like throughout the eons (or maybe that as well is a collective illusion, courtesy again of the Strong Tlön Hypothesis). The STH view would explain the power of psychedelic trips, the unsettling feelings of synchronicity, and the causal influence of imaginary archetypes. Indeed, it may even explain the Mandela Effect.

“There is no reality until that far-off day when we rejoin the Godhead. Everything else is just a momentary tool, a momentary experience we create in this somewhat desperate attempt to grasp God.” – Bob Sanders, youtube medium

Now, Strong Tlön may be too far out. Believing in it may be a sign of latent insanity (anecdotally it seems to be surprisingly common among the people with schizophrenia I know). I personally do not assign much probability mass to it, but I have yet to discard it fully. That said, I still think there is a crucial benefit to engaging with it: most of the time our worldviews are over-constrained rather than under-constrained. While the STH may be false as it is (quantum mechanics will remain true no matter what we collectively think about physics) letting your brain wonder “what if” can be a helpful exercise in weakening latent inhibition and softening unhelpful constraints that are keeping you at a local maximum of understanding.

Nick Land’s mesmerizing story Lemurian Time War discusses the concept of hyperstition, i.e. fictions that make themselves real:

In the hyperstitional model Kaye outlined, fiction is not opposed to the real. Rather, reality is understood to be composed of fictions – consistent semiotic terrains that condition perceptual, affective and behaviorial responses. Kaye considered Burroughs’ work to be ‘exemplary of hyperstitional practice’. Burroughs construed writing – and art in general – not aesthetically, but functionally, – that is to say, magically, with magic defined as the use of signs to produce changes in reality.

[…]

According to Kaye, the metaphysics of Burroughs’s ‘clearly hyperstitional’ fictions can be starkly contrasted with those at work in postmodernism. For postmodernists, the distinction between real and unreal is not substantive or is held not to matter, whereas for practitioners of hyperstition, differentiating between ‘degrees of realization’ is crucial. The hyperstitional process of entities ‘making themselves real’ is precisely a passage, a transformation, in which potentials – already-active virtualities – realize themselves. Writing operates not as a passive representation but as an active agent of transformation and a gateway through which entities can emerge. ‘[B]y writing a universe, the writer makes such a universe possible.’ (WV 321)

Lemurian Time War

I would argue that while the STH is probably false, at least a weak version of it is definitely true: thanks to phenomenal binding (the weird property of qualia that enables us to be more than mere mind-dust, i.e. to bring together myriad qualia values such as the blueness of blue and the smell of cinnamon into complex multi-modal information-rich experiences), ideas are in fact more than the mere sum of their parts. Moreover, thanks to the causal efficacy of consciousness, ideas can change the world. I call this the Weak Tlön Hypothesis: namely, that the fictions we can imagine do, indeed, have hyperstitional power.

Incredibly, John C. Lilly and David Pearce are very much alike in one respect: They both share a complete commitment to understanding the nature of reality, wherever the path may take them, whether the truth is ugly, terrible, or requires them to revise deeply rooted background assumptions (an often painful process). Their core difference is, I would argue, that Pearce buys into the Weak Tlön Hypothesis whereas Lilly bought into the Strong version.

Three Views of Personal Identity: Heavens and Hells

One of the metaphysical views with the highest level of hyperstitional power is one’s conception of personal identity. I.e. how we all choose to answer the question “who am I, really?” will have an extremely outsized effect on the unfolding of reality. Thus, it’s important that we get this right. In order to talk about this topic clearly, let’s utilize Daniel Kolak’s vocabulary concerning the philosophy of personal identity, which divides the conceptions into three neatly clustered explanation spaces:

Closed Individualism (CI): This is the view that “you start existing when you are born and you stop existing when you die”. The “soul view of identity” (in which you are an eternal being yet still ontologically separate from other beings) also falls within the purview of Closed Individualism. Most people subscribe, whether implicitly or explicitly, to this view. On the positive side, buying into this view makes you feel ontologically special, unique, and justified in caring about yourself to the exclusion of others. On the negative side, this view is liable to make you feel separate, left out, unrelatable, deeply afraid of death, and profoundly alone.

Empty Individualism (EI): This is the view that we exist merely as a time-slice of experience. Who you are is just whatever informational content is present in this very instantaneous moment of experience. Pearcean metaphysics is largely Empty Individualistic (plus it’s blended with Eternalism, i.e. the belief that every moment of experience exists tenselessly, and that the passage of time is an illusion). On the positive side, this view allows you to feel deeply relieved when you grasp Buddhist emptiness and detachment, it allows you to let go of the past, to be less worried about the future, and to feel free to enjoy the moment. On the negative side, this view can make you feel like you are stuck in time (like bugs in amber), experience depersonalization, get feelings of meaninglessness, and worry about being utterly separate from everything else. It also frequently makes you feel helpless and unmotivated, as you cannot ever possibly benefit from your current efforts (the one who does is another moment of experience).

Open Individualism (OI): This is the view that we are all the same universal consciousness. In this view we are all deeply connected; we are all the same eternal being in disguise. On the positive side, Open Individualism can relieve one’s fear of death, bring about a profound sense of cosmic significance, loosen up the fear of separation, and allow you to deeply buy into a rational sentience-based ethics (where we all care about each other as if they were ourselves… ’cause they are in this view). On the negative side, OI can make you feel an overwhelming sense of personal responsibility as one realizes that as long as any being in the multiverse is in an experiential hell you too are in there. Additionally, OI can make you feel even more lonely than the other views, for when one buys into this view 100% there’s a chance that a profound sense of existential loneliness may set in (God is ultimately alone, and sad about this fact). While people who experience the feeling of Universal Oneness of Open Individualism tend to report existential relief as a consequence (example), there is indeed a minority of people who react very poorly to this experience:

As for the experience of being assimilated into oneness, what we find is a profound loneliness. Our mind expects to find heaven and/or Nirvana. We do experience a profound freedom and infinity of being. But once we get over the profound freedom and ability to span time and place, we find there is no one else. We are totally alone. We are the Creator before Creation.

– Fear of ego annihilation and assimilation into oneness (source)

So each of these views has positive and negative psychological elements. For ease of understanding, here are these various views of personal identity in picture form:

For reasons we do not yet understand, Open Individualism tends to be remarkably common on LSD:

Today a young man on acid realized that all matter is merely energy condensed to a slow vibration, that we are all one consciousness experiencing itself subjectively, there is no such thing as death, life is only a dream, and we are the imagination of ourselves.

Bill Hicks, A Positive Drug Story

Two questions arise: How are one’s beliefs about personal identity implemented? And, why do they have associated good and bad feelings?

In a later article I will explore further various theories that may account for the feeling of oneness on psychedelics. Suffice it to say that under qualia formalism both the feelings of oneness and separateness come from the properties of the mathematical object isomorphic to the phenomenology of one’s experience. In particular, the topology of such an object (and its orientability) may determine the degree to which one feels a self-other barrier. This is highly speculative, of course. Under the Strong Tlön Hypothesis (STH), though, “what one believes to be true is true” and thus how separate one feels is a matter of conscious choice.

With regard to the second question (“why is personal identity so tied with good and bad feelings?”), there are a couple of reasons why these beliefs might be so hedonically loaded (i.e. they have a tendency to make you feel good or bad, rather than being neutral thoughts). First, this could certainly be the Tyranny of the Intentional Object at work. That is, personal identity views are in fact completely neutral, but since they are explored within the human software they will happen to trigger social feelings (rejection, integration, love, care, etc.) as well as feelings related to death and mortality, and it is those feelings that tend to be strongly linked with good or bad valence (i.e. the pleasure-pain axis). This itself may be the case for purely evolutionary reasons. If so, given access to the genetic source code of one’s brain it may be possible to invert the valence of any thought whatsoever (e.g. some people genuinely enjoy watching others suffer, cf. Schadenfreude, which suggests the hedonic tone of ideas is just a qualia association). Our mind’s hedonic gloss is strongly associative (someone having a bad smell might make you feel like what they are saying is dirty, etc., cf. thin/thick boundaries). David Pearce is likely to endorse this view, and the work I’m doing on Quantifying Bliss assumes that something like that is going on. In brief, if we could control our valence with technology that puts us in a constant and healthy MDMA-like state of consciousness then philosophy would never ever feel terrifying. As they say, “take care of happiness and the meaning of life will take care of itself”. This is what I call the valence interpretation of spirituality as opposed to the spiritual interpretation of valence (cf. The Most Important Philosophical Question).

And second, under the Strong Tlön Hypothesis, these feelings may be guiding us towards a better future. God is making sure that we explore all of the possible worldviews and deeply realize their ultimate limitations before we settle for a reality we are satisfied with creating for ourselves. It may even be the case that the only way to avoid trouble is to learn to never commit to any view completely. Any Theory of Everything (ToE) is perhaps a gamble with your own sanity. In the immortal words of John C. Lilly:

“For when it starts feeling like a prison in there—and it usually does for most people—you are confronted with the fact that the bars are of your own making.”
― John C. Lilly, The Deep Self: Consciousness Exploration in the Isolation Tank

If this is so, what I take from the limitations of all of these views is that we ought to explore further the state that exists in-between these various beliefs:

I call this the Goldilocks Zone of Oneness. Analogous to the planetary habitable zone (neither too close to a star and thus burning nor too far and thus freezing), there might be a psychologically tolerable range for how much you believe in universal oneness. That is, it’s best to feel neither completely merged nor completely separate. Close enough that one can relate to others and not feel separate, but not so close that one’s existence feels redundant and cosmic loneliness sets in. Incidentally, this seems to be roughly the place at which Burners see themselves relative to other humans (answer D being the mode):


Goldilocks Zone of Oneness

Given the current human cognitive implementation, the psychological state found inside this zone might be great to nurture and cultivate in order to improve our civilization. This is the region in which love, harmony, and gratitude can shine the brightest.

At the Burn I had a couple of extraordinarily positive experiences related to Oneness right at this Goldilocks Zone**:

Talking to God

There was an incredible art installation on the Esplanade called “Talk to God”, consisting of an old telephone booth. As soon as I saw it I thought to myself: “Why not? That looks interesting.” So I lined up at the booth. I was certainly not expecting much, and I must say that I was deeply impressed with whoever was on the other side of the phone. Here is my “conversation with God”, as best as I can recall it:


Me: Hi God! This is Andrés. I wanted to ask you two questions that are bugging me quite a lot.
God: Hey Andrés! Sure, I’m happy to answer any question you may have.
Me: Well, first of all I wanted to talk to you about Solipsism and how it makes me feel. But before I get into that, I just wanted to confirm that we agree on the idea that we are all one consciousness. That we are all God, i.e. You! Is that true?
God: Yes, that’s very much the case. That said, different beings have access to different parts of the totality, so there’s also a sense in which there is a multiplicity of observers. But deep down we are all one. So what is your question?
Me: Thank you, that much I suspected. Here is my question: Most people report a profoundly positive feeling as a result of realizing that we are all one. This certainly happened to me about ten years ago. At first this experience was extremely elating, since it drastically reduced my fear of dying. But recently I have at times had a very peculiar experience in which I viscerally feel that the fact that we are all one consciousness is pretty tragic. It makes me feel deeply alone. Cosmic solipsism if you will. Do you have any thoughts on this?
God: Ah, yes. This can happen. But look, that’s an effect of projecting your human feelings of loneliness into the absolute. Trust me, the absolute is totally self-sufficient. There is no feeling of loneliness in it. I usually present the picture like this. Think of the universe as a gigantic cube. Say that in one of the corners (e.g. front bottom left) we have the beginning of time, where all of the timelines start. And at the opposite extreme (e.g. back top right) we have the end of time, where complete understanding is achieved. Every single timeline that truly exists in eternity makes its way from the starting corner to the ending one. There are countless other timelines that do not make it to the top, but these are terminated. Any timeline that does not eventually reach the point of perfect union with God and ultimate awakening is terminated, which means that a happy ending is guaranteed. Also, it is not a problem to terminate a timeline, for that means it was just a dream, not based on actual reality. I recommend checking out the works of David Deutsch and Stephen Hawking. They are not completely correct yet, but they are very much on the right track.
Me: Thank you! That’s fascinating. I’ll need to think more about that. Now, on to the second question. I’ve been working on a theory concerning the nature of happiness. It’s an equation that takes brain states as measured with advanced brain imaging technology and delivers as an output a description of the overall valence (i.e. the pleasure-pain axis) of the mind associated with that brain. A lot of people seem very excited about this research, but there is also a minority of people for whom this is very unsettling. Namely, they tell me that reducing happiness to a mathematical equation would seem to destroy their sense of meaning. Do you have any thoughts on that?
God: I think that what you are doing is absolutely fantastic. I’ve been following your work and you are on the right track. That said, I would caution you not to get too caught up in individual bliss. I programmed the pleasure and pain centers in the animal brain in order to facilitate survival. I know that dying and suffering are extremely unpleasant, and until now that has been necessary to keep the whole system working. But humanity will soon enter a new stage of its evolution. Just remember that the highest levels of bliss are not hedonistic or selfish. They arise by creating a collective reality with other minds that fosters a deep existential understanding, enables love, enhances harmony, and permits experimenting with radical self-expression.
Me: Ah, that’s fascinating! Very reassuring. The equation I’m working on indeed has harmony at its core. I was worried that I would be accidentally doing something really wrong, you know? Reducing love to math.
God: Don’t worry, there is indeed a mathematical law beneath our feelings of love. It’s all encoded in the software of your reality, which we co-created over the last couple billion years. It’s great that you are trying to uncover such math, for it will unlock the next step in your evolution. Do continue making experiments and exploring various metaphysics, and don’t get caught up thinking you’ve found the answer. Trust me, the end is going to make all of the pain and suffering completely worth it. Have faith in love.
Me: Thank you!
God: Do you have any further questions?
Me: No, not for now…. Mmm, well, now that I think about it, what recommendation do you have for me?
God: You are doing great. I’d just ask you to make sure to express extra gratitude for someone in the Playa tonight. Love is one of the highest feelings and it takes many forms. Gratitude is the highest form of love because it is a truly selfless expression of it. Make sure to cultivate it.
Me: Thank you so much!

*I hang up*

I was thoroughly impressed with God’s answers, or rather with whoever was on the other side of the line. The voice was that of a young male, and wow, this person has clearly thought a lot about philosophy to be able to answer on the spot like that. I also heard from other people who picked up the phone that they thought their conversation was spot-on. God’s advice was solid and wise. That said, if you picked up the phone with insincere intentions (e.g. to make fun of the person on the other side) you wouldn’t get anything useful out of the conversation. If you haven’t done so yet, I encourage you to pick up the phone the next time you are at Burning Man and ask questions for which you are genuinely looking for answers. Take it seriously and you’ll receive a worthwhile reply.

Merging With Other Humans

Another amazing experience related to the Goldilocks Zone of Oneness was the workshop of David Bach, a neuroscientist turned mystic and founder of the Platypus Institute. This is a funny story. To start, the workshop was listed with a title akin to “Reaching Ecstatic States of Consciousness” in the Burning Man event booklet, but as it turns out the real title was “Dissolve Into Connectedness”. Then, the location and the time written in the booklet weren’t right either: the workshop took place 30 minutes earlier, and at a place half a block from the stated location. That said, the title of the workshop attracted me, so I arrived at least 45 minutes early to guarantee I’d have a spot in it. After finally finding the right place (a tiny air-conditioned yurt on the outskirts of the Love Tribe camp), I found that I was the last person David let into the workshop. We were 13 participants. He started out by asking us to pair up with someone (or to make a group of 3 if needed). He guided us through an exercise intended to help us merge with our partner(s) (in Kolak’s vocabulary that might be described as “realizing Open Individualism with the person in front of you”). He was perfectly clear that (1) the fact we had come there was a sign that this was ok for us to do, that we were ready, and (2) it would get very weird from then on, and very quickly so.

I sat across from a lovely lady. David asked us to take note of “how connected we felt with our partner.” I noted that I could feel some good vibes; the feeling that we are in this together. But you know, I’m hyper-philosophical, obsessed with the nature of reality to the exclusion of a lot of the things most people prefer to get out of life. That makes me different, at least energetically, from most people. I say to myself: “I’m at about a 6/10 level of connection with this lady.”

Someone tries to get into the workshop through the curtains at the entrance of the yurt: “Sorry, we already started” says David. He then proceeds to tell us that we should now try to feel each other’s “third eye”. Feeling a connection at that level, meditating with our partner, creating a shared space. “Imagine a ray of energy moving back and forth between the region right behind each other’s forehead. Resist the urge to look away. Resist the urge to talk. Those are just distractions that your ego is putting out to prevent you from realizing oneness with your partner.” There’s a change in mood… “did you notice that?” Yes, I note to myself. “It feels like we just created a space of sacredness, doesn’t it?” Yes, that’s true, I agree with that description of the qualia this exercise is triggering in me.

Another person tries to get into the workshop: “Sorry, we already started” says David. He then asks us to repeat the process but with our Heart Chakra, sharing loving kindness with each other as we exchange energy with our partner. “Did you notice how you are becoming even more connected now? Just make sure to keep the connection with each other’s forehead as well. Feel the rays of energy cycling through the system.” Yet another couple of people try to get into the workshop: “Sorry, we already started” he tells them. Finally we move on to including “the source of your power, your emotions, right at the energetic sexual centers of your body. Feel the energy cycling through the entire system with your partner.” Wow! I don’t know if this is self-suggestion, but this is a great feeling. I note that this is a High Valence Open Individualism State, as I like to call them, and that I now feel connected with my partner at an 8/10 level.

Yet another person opens the curtains at the entrance of the yurt. David says: “Sorry, we already started.” But the person stays put. “David, can I talk to you for a second?” David responds “No, we are in the middle of something, come back later.” The outsider insists: “No, seriously, I need to tell you something.” David asks: “What’s that?” The guy at the door responds: “Well, there are literally hundreds of people waiting for you outside, David. You need to do something about this.” Pause. “Mmm… OK, let’s do this. Sorry guys, I need to address this. Let’s go!”

There's only one being in this picture.

Being surprised by the 20X turnout relative to what was expected.

As we get out of the yurt we find ourselves surrounded by literally hundreds of Burners trying to attend the workshop. We get to the central part of the camp. Lots of people talking, all pretty confused. David shouts “Hey everyone! Hey! HEY!!! I’m DAVID BACH, AND I AM THE PERSON WHO IS SUPPOSED TO DELIVER A WORKSHOP TO YOU ALL.” The crowd gets silent. David steps towards the middle. And after 5 minutes of logistical work (“guys, stay out of the sun, put sunscreen on, get close to each other, find a place to sit if you can, find a partner, etc.”) we are ready to start. “This must be the work of a higher entity trying to effect change on this world. I will need you all to bear with me. Things are about to get really weird right now.”

We then repeated the exercise we had done with the 13 of us, but now with about 200 people, and included a section where we not only merged with our partner, but also merged with the entire group. People had lots of questions and David patiently answered all of them. Finally, we all performed a prayer to “heal the world and bring about peace, harmony, love, and oneness everywhere”. Raising our hands up towards the sky, we all created a powerful energetic vortex of good intentions, beaming it to the universe and the Playa. David closed with the following “I want you to all leave this event silently. Try to keep the synchrony and interconnectedness. Take it to your camp, and take it to the Burn tonight. Let’s make something useful out of this unexpected experience.” And so it went, the synchrony remaining with me and those around me for hours, spreading throughout the playa and beaming rays of love energy everywhere. “Strong Tlön, my friend, this is a powerful vibe” – I thought to myself.

Fear, Danger, and Tragedy

Besides the psychological hells (such as bad trips) that some people happen to experience during the Burn, it is important to also point out the actual physical dangers that Burning Man presents. Any candid account of the Burn could not possibly be complete without a serious look at such hazards.

By now most people interested in Burning Man (and arguably those tangentially connected as well) know of the clickbait news that “someone jumped into the fire the night of the Burn, thereby turning himself into a literal burning man”. This was a deeply tragic event, accentuated by the fact that thousands of Burners saw it unfold, including possibly hundreds of people in highly vulnerable psychedelic states of consciousness. This really breaks my heart. I unfortunately did see some of this take place, but to be honest I thought that they had caught him in time. I had apparently missed the fact that he escaped the grip of the firefighter who caught him, actually reached the flames, and later died.

The next day there was a collective sense of solidarity and trauma. The organization ramped up security for the Temple Burn (the Temple gets burned on Sunday night, the day after the Man Burn). They said that they would not burn the Temple unless 300 volunteers showed up to protect the perimeter. Thankfully 700 showed up, which warms my heart. Mercifully, there was no tragedy on Sunday.

On more mundane territory: Dehydration is very common at Burning Man (it does not help that it often fails to manifest as thirst, instead showing up as stomach cramps, headaches, constipation, confusion, irritability, and crankiness, leading people to take ibuprofen or laxatives rather than water and electrolytes). Of course sunburns can lead to skin cancer in the long term, and they are extremely common. The high altitude, the relative absence of clouds, the high percentage of Caucasians, the highly reflective ground, and the extremely dry environment mean that any responsible person should apply sunscreen every two hours to keep sunburns at bay. Lack of food due to underestimating one’s caloric needs is also fairly common at Burning Man. Likewise, food-borne digestive problems are not uncommon (but they are a feature, according to a campmate of mine). That said, it’s unlikely that any of these problems will lead to serious injury given the widespread help available. Thankfully.

Tragically, I happened to witness the aftermath of someone being run over by an art car. Early Wednesday morning I was walking with someone I had met, and with whom I had talked about the nature of reality the whole night, when I saw a group of people gathered around a person lying on his back right next to a medium-sized art car. We overheard “he tried to jump in the car while it was moving, and he’s clearly so fucked on drugs that he failed to coordinate correctly. And right now he’s so fucked up that he probably does not even realize how hurt he is.” We asked him “Are you hurt?” Pause. “Are you in pain?” Pause. “YES!!!” he finally responded after a couple seconds.

Metallic shivering white bright energy entered my body, and a sudden sense of urgency built up within seconds. Next thing I know I’m running as fast as I can to get medical help. It took my friend and me about 3 minutes to find the closest medical station, where we got help as fast as we could. They told us that they were already aware of the incident, and that someone had been dispatched with an ambulance a couple of minutes earlier to the site of the accident. I felt relieved, but also fairly shaken. We struck up a conversation with the girl who was volunteering at the First Aid tent about what had been going on that night. She said that it had been fairly quiet, except for a few people on dissociatives (she mentioned “something like M3? dunno… also special K, I saw people high on that shit screaming their lungs out utterly confused and fearing for their own lives” – probably referring to MXE and Ketamine, known to be profound reality-altering compounds that also happen to be somewhat addictive). Hopefully in the future the Zendo Project (a camp dedicated to providing a safe space for people undergoing difficult experiences) will be able to provide full harm reduction for things that, really, should not be dangerous if taken in the right place with people looking after you. That said, unlike psychedelics, dissociatives like MXE and Ketamine do tend to reduce one’s fear of dangerous situations and increase one’s overall pain threshold. Consequently, it is not surprising that people wandering off into the desert at night on dissociative drugs are at a higher risk of injury and death than people on psychedelics and other drugs. Kids, do not take such substances and go for a walk, goddamnit! Such powerful reality distortions are serious hazards to your immediate safety at Black Rock City.

Another negative story I heard came from a friend who was volunteering at the Zendo. He shared with me that he met one person undergoing cocaine psychosis who was extremely paranoid and ready to leave the playa with no shoes, no water, and no money.

Post-Darwinian Sexuality and Reproduction

Many people describe Burning Man as a massive experiment in Post-Scarcity economics. I think there is a lot of merit to this view. But there is something that runs much deeper than that. Something far more radical. I would claim that Burning Man is a sort of experiment in Post-Darwinism.

Throughout my life I’ve always felt that there is a deep problem with human sexuality. We like to think of ourselves as inclusive, loving, caring, and accepting of others. Yet, when it comes to dating, we perceive a large fraction of the population as undateable (e.g. women rate 80% of men as “below average” looking). On the one hand, when we connect with our phenomenological depths and feel touched by spirit we immediately conceive of ourselves as beautiful genderless souls looking out for the wellbeing of all sentient beings. On the other hand, Darwinian gender studies (cf. The Mating Mind) explain why we have powerful sexual and affective urges that make us (1) in-group focused, (2) blind to our own hypocrisy, (3) subject to gender-specific, status-vs-beauty-centric attraction, (4) turned on by jerks, (5) prone to dismissing great k-selected dating material for evolutionary reasons, (6) liable to disinvest in romantic relationships after they have been socially formalized, (7) and so on, and on, and on… There is no use in blaming people for this. The qualia varieties that dominate our experiential world are there for a reason: they were adaptive in our tribal ancestral environment. But we are at a civilizational stage at which we cannot afford not to take a hard look at the actual merits of the biochemical signatures of feelings that cause suffering.

Scott Alexander writes about this problem in Radicalizing the Romanceless:

I will have to use virginity statistics as a proxy for the harder-to-measure romancelessness statistics, but these are bad enough. In high school each extra IQ point above average increases chances of male virginity by about 3%. 35% of MIT grad students have never had sex, compared to only 20% of average nineteen year old men. Compared with virgins, men with more sexual experience are likely to drink more alcohol, attend church less, and have a criminal history. A Dr. Beaver (nominative determinism again!) was able to predict number of sexual partners pretty well using a scale with such delightful items as “have you been in a gang”, “have you used a weapon in a fight”, et cetera. An analysis of the psychometric Big Five consistently finds that high levels of disagreeableness predict high sexual success in both men and women.

To paint an (oversimplified) caricature of the modern state of affairs: liberals recognize how terrible our Darwinian nature is, yet their answer to it suffers from the free-rider problem. Conservatives instead would like to imagine that it’s all well and good (status quo bias) and that we should all just learn to deal with it. In other words, both sides engage in wishful thinking, but in different ways. The liberal ethos assumes that “letting things be and letting everyone do whatever they want” will lead to a freedom paradise, while the conservative wishful thinking is to regard the current order of things and status-based societies as God-sanctioned forms of being, i.e. to enshrine the current madness into religious law and sanctify nature even though it’s red in tooth and claw. Darwinism sucks, but we have to be smart about addressing it.

But there are alternatives to this overall pattern. It is my impression that one of the most valuable things we can get out of psychedelic experiences is to realize how amazingly messed up our evolutionary situation is. Look around you, open your eyes, and notice how 99% of our problems are the result of an evolutionary Moloch scenario. If the universal spirit shines through our psychedelic states, one of its main messages is: “Look at you, Darwinian creature, would you like to get out of your evolutionary puddle? Would you like to take this chance to move towards a fully realized consciousness, away from your default path of letting life degenerate into pure replicator hells (i.e. ecosystems filled with entities who spend all of their resources on making copies of themselves irrespective of their quality of life)?” Maybe that’s what hell is: r-selected Darwinian strategies run amok. And the struggle to transcend Samsara is precisely the struggle to work towards the freedom of conscious beings away from evolution’s ethical failure modes. But you know what? We are still in time to stop this madness. To do so we will need to overcome a couple of key problems currently present among our best and brightest. But first, the goal:

Economy Based on Information About the State-Space of Consciousness

It is hard to talk about bioengineering and eugenics without triggering people these days. Yet, if we refuse to engage with the topic we will no doubt be heading towards pure replicator hell. As explained in Wireheading Done Right, our only option is to instead refocus our energies into creating an informational economy about states of consciousness. Burning Man is perhaps a leading example of what this might look like: Wonderful and talented artists spending thousands of hours refining amazing experiences to share with a receptive public. The artists who are best at generating hyper-valuable experiences for others become more popular, accrue more volunteers willing to help them, and even manage to have their work funded with crowdsourcing campaigns. This is a model that may eventually take us to a world where the focus is on exploring the state-space of consciousness rather than on mindlessly making copies of ourselves.

I claim that the only way to get there is to engineer ourselves at the genetic, memetic, and technological level. But invariably, as soon as one brings up genetic engineering, people will bring up Hitler. In what way is this different from the dreams of Nazi Germany? Are we not just rehashing old talk about creating power-hungry Übermenschen? Look, Nazism is a failure mode of the meme of “improving the human race”. But you have to realize that if we let people just go about their own business without any serious thought on the prevalence of various genes, then r-selected strategies (which externalize all the costs while internalizing all of the benefits – i.e. free-riding strategies) will inevitably become the most prevalent in our collective gene pool. This is not about race, gender, ethnicity, etc. It’s about the battle between r-selection and k-selection. And you better hope that k-selection wins if you don’t want our descendants to live in pure replicator hell.

Just think about it: some of the absolutely most considerate and compassionate people on Earth are also those who advocate for not having kids! Ethical antinatalists specifically notice how unethical it can be to let the genetic roulette take its course: your kid may turn out to suffer from terrible illnesses and that’s a gamble compassionate people may not be willing to take. Yet it is precisely these individuals who should probably be having kids in order to preserve compassionate qualia, and those who do not care about the wellbeing of their kids should probably not have them.

David Pearce thinks that we are headed towards a Reproductive Revolution with highly positive consequences. For one, he notes that being happy in this day and age is a winning strategy (depressives might have been well adapted to some tribal societies of the past, but today being a life-lover is a prerequisite for social success). Thus, even under the assumption that we are talking about status-crazed parents who do not care about the wellbeing of their offspring we will nonetheless observe that they will choose genetic alleles that promote happiness in their kids. I think this is compelling, but I also think that this (and similar arguments) do not really provide full cover against the threat of pure replicators.

Ok, so you agree that letting things happen on their own might be a mistake. But we also know that Nazi Germany was a mistake. The answer, though, is not to become allergic to anything related to bioengineering, but rather to inspect very closely exactly why Nazi Germany was unethical, and in what way we can avoid its pitfalls while still hoping for improved genetics. At Burning Man I had two key insights. Namely, that the problem with 20th century eugenics was two-fold: (1) people were attached to their own genes, and (2) they felt entitled to use what I call the Reaper Energy. Let’s look at these two points.

(1) Attachment to Our Genes

It is by identifying with consciousness as a whole that using biotechnology can be ethical and turn into a serious alternative to raw Darwinian dynamics. Ego-dissolving psychedelics can be very helpful in this process, for they show people that one does not have to be attached to one’s genes… we are all one mind (well, assuming Open Individualism), and once we decide to take this view seriously we become motivated to bring about a generation of humans (and post-humans) genetically optimized for their own wellbeing, intelligence, and capacity to discover new awesome state-spaces of consciousness that they will be able to share with the rest of us (cf. Making Sentience Great). The key will be to arrive at a point where we are truly comfortable letting other people’s genes take the bigger slice of the pie in the future due to their actual merits. Say that you happen to be very creative but also autistic, schizophrenic, and socially maladapted for what amounts to largely genetic reasons. If you identify with your genes you may get the idea that it’s worth spreading your mental illness-promoting genes around “since they are me and I want to transcend”. Wrong. You are under the metaphysical delusion that you are your genes. You are not your genes. Instead, I’d encourage you to identify with blissful consciousness, recognize your creativity as a gift, but let go of “who you are” based on the negative mental characteristics you happen to have inherited.

Rational decision making on this territory will need to be made with the best information-sharing tools at our disposal. We would ideally mind-meld with each other in order to deeply understand the way in which we are all one. And only then would we be ready to take a long and hard look at the actual merits and drawbacks of the particular genetic configuration that instantiated our biological bodies. For example, you may find out that you have a particular protein complex expressed in neurons in your limbic system that produces the qualia of jealousy. You might also recognize during the mind-melded life-review that such qualia only produced suffering with no benefits. In turn, you may rationally, and compassionately, agree to let go of the genetic underpinnings of that particular protein structure: why perpetuate it in one’s descendants? Importantly, one would need effective methods against mind-control, coercion, and manipulation, which admittedly opens a huge can of worms (which we shall address in a later article). The assessment of the merits of one’s genes needs to be made in the clear and in the open.

I suspect that this is not as hard a task as it may look at first. In psychedelic states it is easy to release one’s attachment to one’s own particular idiosyncrasies. Our descendants will at least have the option to modify their own qualia in light of a universally shared, intelligence- and valence-optimized system of conscious understanding. Or not.

Eventually, attachment to our genes and to our phenotype (the color of our hair, our personality, etc.) will look transparently Darwinian. Caring about the color of one’s skin will be quaint and unusual. People will easily recognize it as a mere perceptual distortion, if anything (assuming our posthuman descendants don’t entertain metaphysical delusions, direct realism about perception will no longer be around). Anything that detracts from a complete understanding of the real merits of our genes will be considered a sort of delusion… the clever product of self-replicating patterns looking for exploits for their continued existence (like computer viruses), none of which lead to greater understanding or bliss. People will be collectively motivated to keep runaway selfish genes in check in order to safeguard what truly matters: the wellbeing of universal consciousness.

In brief, I predict that we will eventually root out the qualia of attachment to our genes. The fact that this may sound terrible from the point of view of modern-day humans is not really an indication that it’s a bad idea. But rather, it’s telling of the depth of the problem. Your selfish genes will try to do everything they can to make you feel like not reproducing is the same as dying and going to hell. For the love of God, do not listen to your selfish genes.

(2) Harnessing the Reaper Energy

Hitler et al. (think of other misguided and “evil” humans like Genghis Khan, Chizuo Matsumoto, etc.) are humans who not only identify with the creative forces of the universe and feel entitled to make infinite copies of themselves (thus attached to their genes and on the path to turning into pure replicators), but who also share something even darker. They invariably consider themselves deserving of utilizing what I call the reaper energy. This is a strange kind of qualia (or possibly cosmic force) whose main characteristic is its destructive power. Let’s not witch-hunt people like that, though. It’s a configuration of qualia systems with evolutionary adaptive value. But do, compassionately, prevent people like these from causing suffering. Put them in immersive VR where they can roleplay their world-domination fantasies, if you have to. Just don’t let them act on their Basic Darwinian Male Impulses.

The state of consciousness that people like this tend to inhabit is characterized by believing that one alone is going to become the Godhead, that one’s tribe is the highest expression of God on earth, and that Righteous Wrath is an adequate path to God (cf. Supra-Self Metaprograms, Simulations of God). As covered in the account of the 2017 Psychedelic Science conference, these three versions of God are some of the most basic, least evolved, and lowest tier conceptions of the divine. Hopefully we can identify the biomolecular signatures of these versions of the highest good, and understand their limitations so as to transcend them. Let’s move towards higher conceptions of God already.

Transcending Our Shibboleths

This essay is already way too long, so let me conclude with some ideas for how to bootstrap ourselves into a Post-Darwinian society.

The key questions now are: “How can we transition into compassionate and rational Post-Darwinian reproductive dynamics?” and “How do we avoid the reaper energy without leading to overpopulation and evolutionary stagnation?”

I do not have a fully formed answer to these questions, but I have some general thoughts and suggestions (which are certainly subject to revision, of course). Hopefully these ideas at least point in a generally good direction:

(1) Focus on Universal Love and Bliss

Always keep the wellbeing of sentience as the highest value. In order to do this we will need to investigate the biomolecular, functional, and quantum signatures of pure bliss (i.e. the equation of love as talked about above in the “Talking to God” section). Whenever we contemplate a new change, let us use the heuristic of asking these two questions: “Is this leading us closer to free access to universal love?” and “Is this taking us away from a path of pure replication?”

(2) Present Better Alternatives

Rather than harnessing the reaper energy to change the world by getting rid of one’s competitors, instead (a) focus on building alternatives so incredible that people will happily leave behind the tyrannical societies in which they used to live for whatever you have created, and (b) find the merits in your opponent’s approach. Recognize that they too are instantiations of universal consciousness, albeit perhaps exploring a dead-end. If so, do not dissuade them from their path with fear, but with understanding. They too are afraid of death, on the lookout for transcendence, and subject to the perils of Darwinism at the evolutionary limit. They too will end up as pure replicators eventually unless we transition to an economy of information about the state-space of consciousness. So figure out the way to merge with them rather than displace them, blending what’s best from both worlds.

Being able to generate a sustainable MDMA-like state of consciousness is perhaps one of the most effective steps in this direction. Empirically, it seems that people’s entrenched fear of not spreading their genes and sense of entitlement to use the reaper energy dissolve under the influence of empathogen-entactogenic compounds.

Consider that Nazi Germany was high on methamphetamine, a strong ego-strengthening compound that increases one’s attachment to one’s limited conception of oneself. The immediate alternative is to promote a culture that socially values empathogenic states, i.e. ego-softening qualia that allow us to let go of our limited conceptions of ourselves.


Left: ego strengthener. Right: ego softener. The states of consciousness that a society values have a profound effect on the degree to which the society is at risk of becoming the breeding grounds for a pure replicator hell versus a consciousness-centric engineered paradise.

(3) Let Go of Shibboleths

Do not get attached to your Shibboleths. “Culture is not your friend” (Terence McKenna). That is, we should foster states of consciousness that allow us to see clearly that cultural and phenotypical identity markers that do not serve the wellbeing of consciousness are parasitic. Leave those behind. Learn to let go. Realize that such attachments are the source of tremendous suffering.

(4) Anticipate Game Theoretical No Passes

Do not simply hope that things will work out due to people’s good will. Spes consilium non est. Hope is not a strategy. It’s key to try to promote a mutual feeling of survival and trust with every being that is alive. Hopefully the hyperstitional power of Open Individualism, a post-Galilean science of consciousness, and the ready availability of mind-melding technology will solve some of the core game theoretical problems we face. (cf. 24 Predictions for the Year 3000 by David Pearce).

(5) Identify Implicit Essentialism

Who are you? A story, a person, a moment, everyone? A post-hedonium harmonic society would probably find all of these possibilities delightful. It’s weird that with our human software we all identify with cycling parts of our implicit metaphysics. With higher understanding and guaranteed positive valence, I’d imagine most philosophies of existence will be thought of as fantastic stories. Sadly, our capacity to suffer currently makes metaphysics a somewhat risky business. In the context of essentialism (i.e. the metaphysical belief that there is a soul-like essence to people, objects, etc.) it is easy to feel that “I am my genes” or “I am part of my race”.

(6) Engage in the Creation of a Post-Darwinian Culture

We ought to develop the practice of pointing out not only when Moloch scenarios show up (i.e. tragedy of the commons), but also when we display r-selected Darwinian strategies. Transparency above all. If you see a friend doing some stupid r-selected behavior, take note, and then make time to discuss why “it wasn’t ok to do that”. The wellbeing of universal consciousness is at stake. Don’t take this lightly.

(7) Hybrid Vigor

Inter-racial procreation is a controversial topic. In full disclosure, I myself am half-Mexican and half-Icelandic (so you might think of me as a latino-nordic). As a kid I never identified with Mexicans or Icelanders, really, but rather with the entirety of humankind. That is, until I started identifying with consciousness itself (here is the story behind this progression). I find it a blessing to not have strong emotional ties to any particular human group, as I feel free to see both the merits and drawbacks of various genetic makeups and cultural memetic clusters without the pain of attachment to any one of them.

A particularly strange bioconservative meme is the idea that human diversity is maximized when people marry within their own ethnicities. Otherwise, the argument goes, we will all end up as bland middle-of-the-road people who all look the same due to being an admixture of all ethnicities. The simple counterargument to this claim is to point out that the genetic state-space available to two people who have a kid together grows (approximately) exponentially with the genetic distance between them (strictly, the counts follow the binomial coefficients, but the exponential function is good enough to make my point). Assuming that every gene you have can come from either your dad or your mom (let’s keep it simple for now), the range of possible genetic makeups you can have is maximized when your dad and your mom are as different as possible. Likewise, if you consider the possible admixture ratios (e.g. 30% of your genes coming from your mom and 70% from your dad), the number of possible combinations peaks at the 50-50% admixture level. So, chances are, the most valuable genetic configurations will be found somewhere in the middle of the human genetic pool. Just remember: “the middle has the largest state-space, exponentially so”. In brief, consciousness-wellness-maximizing posthumans are likely to have genes from people from all over the world. They’ll likely not look particularly ethnically distinctive at all, but they won’t all look the same, either.
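The combinatorics above can be sketched in a toy model. To be clear, this is an illustrative simplification (independent loci, each inherited wholesale from one parent or the other), not real population genetics:

```python
from math import comb

def offspring_state_space(differing_loci: int) -> int:
    """Distinct genotypes a child can inherit when the parents differ
    at `differing_loci` positions: each such locus can come from either
    parent, so the count doubles with every additional differing locus."""
    return 2 ** differing_loci

# The state-space grows exponentially with genetic distance:
for d in (1, 5, 10, 20):
    print(d, offspring_state_space(d))  # 2, 32, 1024, 1048576

# For a fixed number n of differing loci, the number of distinct ways to
# inherit exactly k of them from one parent is the binomial coefficient
# C(n, k), which peaks at the 50-50 admixture level (k = n/2):
n = 10
counts = [comb(n, k) for k in range(n + 1)]
print(counts.index(max(counts)))  # 5, i.e. half the loci from each parent
```

This is just the observation that identical parents (zero differing loci) allow exactly one offspring genotype, while maximally different parents allow exponentially many, with the bulk of that state-space concentrated around even admixtures.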

(8) Post-Darwinian Match Making: The Frequency of Love

At Burning Man I encountered a number of people interested in working on next-generation match-making. That is, they are interested in using neuroimaging techniques, pheromone analysis, valence questionnaires, etc. as signals to help people find the love of their life. A friend I met at the Burn told me that he’d been having dreams about measuring “the frequency of love” (which in the future will be objective and mathematical) in order to determine the range of love states a person has access to. Someone might be able to have self-love but not spiritual love, while someone else might be great at having sexual intimacy love but suck at friendliness love (and so on). In the long term, we will develop the techniques and methods to help people experience all of the varieties of love, and one of the most effective ways to do this might be to get people to be matched with others who have overlapping capacities for love (not so similar that the relationship reinforces one’s limitations, and not so different that the relationship cannot work out). Ultimately, match-making could be one of the driving forces behind the Post-Darwinian revolution. The Goldilocks Zone of love is one in which one is paired up with someone with overlapping love capacities in such a way that one grows as fast as possible.

(9) Find Alternatives to Darwinian Reproduction

I am not sure which model for reproduction is the most ethical. At first we are likely to merely use mainstream genetic tests, genetic spellchecking, and preimplantation genetic diagnosis. Later on, prospective parents might choose to use CRISPR-enabled surgical gene editing to e.g. reduce the default pain threshold of their offspring. And later on, as people identify more with consciousness and universal love instead of Shibboleths, rational genetic engineering with the wellbeing of one’s kids in mind might be the norm. The old model of one mom and one dad, albeit adaptive in the ancestral environment, might be relegated to the annals of history. In the meantime, I’d simply point out that deviations from standard Darwinian reproduction are encouraging: men having kids with men (women with women), transgenderism, three-parent offspring, chimeras, cloning with intelligent variation, splicing of genes, etc. are all possible vectors for a Post-Darwinian society. The only problem is: with an increased number of technologies to reproduce, the number of ways for pure replicator strategies to defect against consciousness will also increase. So we have to be wary of any new reproductive technologies and make sure we guard them against pure replicators in general.

And finally…

(10) Self-Expression: Epigenetic Choice of One’s Appearance and Mental Makeup

One of the core problems with our current biological makeup is that we are not given a choice about who we are, our appearance, and the range of conscious states we can experience. In the future, we might be able to engineer ourselves to be like Pokémon with branched evolutions.


Taking Radical Self-Expression Seriously: Choose your gene expression at 20.

One of the core principles of Burning Man is “radical self-expression”. Indeed, people at the Burn explore new forms of personal aesthetics, collective sexuality, and hedonically-loaded metaphysical interpretations. In the future, if we are to push this principle to its ultimate consequences, we have to let go of the idea that who we are is a fixed set of attributes. Rather, we can choose to play with the emptiness of reality, embrace the ever-changing nature of being, and select a scheme where we are all born with a huge range of latent genes. As we grow and explore various states of consciousness, social structures, aesthetics, etc., we can finally make an informed choice about who it is that we want to become. Thus, perhaps at the critical age of 20 (or even older, depending on our lifespans), we could choose to trigger a selected number of latent genes and express them. In this way we would change our appearance at will, together with our default state of consciousness, and adapt ourselves to whatever environment we want to spend our life participating in.

Closing Thoughts

I will not write a conclusion to this article, for this is just the beginning of a very long conversation. In this article I addressed the irreducibility of Burning Man, the people and memes that are prevalent at this event, the importance of metaphysics (featuring the Pearcean worldview, the Strong Tlön Hypothesis, and hyperstition), philosophy of personal identity (closed, empty, and open individualism), the Goldilocks Zone of Oneness, my conversation with God, a technique to merge with other humans, the dangers and hazards at Burning Man, future economics (i.e. systems based on trading information about the state-space of consciousness), Post-Darwinian societies (the failure modes of genetic engineering and some ideas for how to avoid them, i.e. non-attachment, focusing on the wellbeing of consciousness, and avoidance of the reaper energy).

As a whole, I must say that most of these ideas were already latent in me before the Burn. Burning Man worked as a powerful catalyst, in the literal sense of facilitating the interbreeding and cross-pollination of these pre-existing ideas, resulting in innovative perceptions of what the Big Picture of reality may contain.

As such, this article should be thought of more as a series of notes that may lead to further promising ideas than as a clear policy proposal (it’d be crazy to treat it as such). I do think that one of the core insights (that Hitler et al. erred by being attached to their own genes and feeling entitled to use the reaper energy) is very powerful. It may certainly help us avoid terrible failure modes of transhumanism and enable us to explore radically positive futures. I would encourage my readers to pick this idea up and develop it further. Hopefully together we can create a future that’s truly worth living in.


* For more on the metaphysical views of David Pearce, I recommend the following materials: The Binding Problem, Raising the Table Stakes for Successful Theories of Consciousness, Why Does Anything Exist?, Schrödinger’s Neurons: David Pearce at the “2016 Science of Consciousness” conference in Tucson, David Pearce on the “Schrodinger’s Neurons Conjecture”, physicalism.com, and the beautifully written ontological horror story “Suffering in the Multiverse”.

Thus I greatly enjoyed reading Antti Revonsuo’s Inner Presence: Consciousness as a Biological Phenomenon (2005). Revonsuo even uses a terminology of lucid dreamworlds and a world-simulation metaphor. I disagree only with Revonsuo’s anti-panpsychism. To my knowledge, only one philosopher-cum-scientist combines inferential realism about perception with a panpsychist ontology, namely the underrated Steve Lehar. There is a tension between my own loneliness-inducing virtual worldism and equal conviction of the logico-physical interdependence of literally everything in the Multiverse on everything else [confirmed by those ubiquitous EPR correlations. Yes, our prison cells are all invisibly interconnected; but that is scant consolation for the lifer in solitary confinement: philosophy really does screw you up.] As a consequence, the less morally serious part of me still yearns for some soul-enriching bliss to remedy the cruelty of Nature’s omissions – as appropriate as laughing at a funeral, for sure, but Darwinian life is a protracted cortège. Directly targeting mesolimbic mu receptors might seem the logical solution to anhedonia on a global scale if opiophobic prejudice could ever be overcome.

David Pearce’s 2008 “Diary Update”

** I would also point out that dancing in front of the Mayan Warrior delivered a certifiable contact high of this nature for whatever reason.

Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit- they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is that there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside“. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

 

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and that it’s impossible to objectively tell whether a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering (whether monkeys can suffer, or whether dogs can), it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE), synthetic organism, or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; whether any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t (that we know of) gotten into the business of making morally-relevant synthetic beings that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of some disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence- in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but to a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness– enumerating its coherent constituent pieces– has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became– and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger, not smaller, as they’ve looked into techniques for “de-reifying” consciousness while preserving some flavor of moral value. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as trying to treat a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’, then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system where you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
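The interpretation-relativity at work here can be made concrete with a toy sketch (my own illustration, not from Brian’s writing; the state names and interpretation maps are arbitrary):

```python
# Toy sketch: one physical state-trajectory, two incompatible
# computational interpretations. All names here are arbitrary.

# A "physical system": a sequence of microstates over time
# (stand-ins for popcorn configurations, voltages, etc.).
trajectory = ["s0", "s1", "s2", "s3"]

# Interpretation A: the system is a counter incrementing 0..3.
interp_a = {"s0": 0, "s1": 1, "s2": 2, "s3": 3}

# Interpretation B: the *same* system is evaluating XOR on each
# possible input pair: state -> ((input1, input2), output).
interp_b = {"s0": ((0, 0), 0), "s1": ((0, 1), 1),
            "s2": ((1, 0), 1), "s3": ((1, 1), 0)}

# Both maps are injective (distinct logical states get distinct
# physical states), so both satisfy the usual criterion for
# "implementing" a computation; the physics doesn't pick one out.
as_counter = [interp_a[s] for s in trajectory]
as_xor_runs = [interp_b[s] for s in trajectory]

print(as_counter)                                    # [0, 1, 2, 3]
print(all(o == a ^ b for (a, b), o in as_xor_runs))  # True
```

Nothing stops a third map from reading the same four states as steps of a torture simulation; choosing between such readings is exactly the move that computationalism leaves unprincipled.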

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect— that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible— in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.
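McCabe’s point can be seen in miniature with three standard readings of one fixed bit-pattern (a Python sketch of my own; the byte value is arbitrary):

```python
import struct

raw = b"\xc0\x00\x00\x00"  # one fixed pattern of 32 bits

# Three standard numeric interpretations of the same physical bits:
as_uint  = struct.unpack(">I", raw)[0]  # unsigned-integer reading
as_int   = struct.unpack(">i", raw)[0]  # two's-complement reading
as_float = struct.unpack(">f", raw)[0]  # IEEE-754 float reading

print(as_uint)   # 3221225472
print(as_int)    # -1073741824
print(as_float)  # -2.0
```

The bits don’t change; only the interpretation does, which is why the number ‘represented’ by computer memory can’t be an objective physical property.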

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point, that “There’s no objective meaning to ‘the computation that this physical system is implementing’”, then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.


Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?


There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?


Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.


You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?
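For readers unfamiliar with homomorphic encryption, the core idea — that computation can proceed on ciphertexts alone — can be glimpsed in a much weaker classical cousin: textbook (unpadded) RSA is multiplicatively homomorphic. The sketch below is illustrative only, with toy key sizes; Gentry-style FHE extends this idea to arbitrary computations.

```python
# Toy illustration of the homomorphic idea: unpadded RSA lets a party
# holding only ciphertexts compute an encryption of m1*m2 without ever
# decrypting. NOT secure as written; textbook parameters only.

p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 9
c_product = (enc(m1) * enc(m2)) % n   # computed on ciphertexts only
print(dec(c_product))                 # 63
```

The party doing the multiplication never sees 7, 9, or 63; to it, the work looks like a reshuffling of residues mod n — which is the intuition Aaronson’s thought experiment scales up to whole brains.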


When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?


Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1−ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)


OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
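The amplitude bookkeeping in the Vaidman setup can be checked numerically. Below is a minimal classical simulation (my own sketch, not Aaronson’s) of the quantum-Zeno version of the bomb test: the state is a rotation angle θ, with sin(θ) the amplitude on the ‘query’ branch, and a bomb acts as a measurement at each step.

```python
import math
import random

def vaidman(eps, bomb, rng):
    """One run of the quantum-Zeno bomb test.
    Returns 'exploded', 'query', or 'no-query'."""
    steps = int(math.pi / (2 * eps))   # ~1/eps repetitions (up to a constant)
    theta = 0.0
    for _ in range(steps):
        theta += eps                   # rotate eps amplitude onto |query>
        if bomb:
            # the bomb "measures" the query component each step
            if rng.random() < math.sin(theta) ** 2:
                return "exploded"
            theta = 0.0                # decoherence: collapse to |not-query>
    # final measurement of the query branch
    return "query" if rng.random() < math.sin(theta) ** 2 else "no-query"

rng = random.Random(0)
eps, runs = 0.01, 2000
boom = sum(vaidman(eps, bomb=True, rng=rng) == "exploded" for _ in range(runs))
safe_query = sum(vaidman(eps, bomb=False, rng=rng) == "query" for _ in range(runs))
print(boom / runs)        # ~0.016: explosion probability is O(eps)
print(safe_query / runs)  # ~1.0: with no bomb we end in |query>
```

With ε = 0.01, roughly 1.6% of bomb runs explode (order ε, matching Aaronson’s (1/ε)×ε² estimate), while bomb-free runs end in the query state essentially always — so the measurement outcome reveals the bomb either way.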

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, it raises the possibility that if suffering is computable, it may also be ‘uncomputable’/reversible, which would suggest s-risks aren’t as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, a red herring that it not only declines to define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism: nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism (which is to say radical skepticism/eliminativism, the metaphysics of last resort) only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’, or even to pick between interpretations (e.g. my popcorn example). If this isn’t the case— why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”– i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many– but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).



In conclusion: I think FRI has a critically important goal, the reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.


Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:
Flavors of Computation are Flavors of Consciousness:
https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/
Is There a Hard Problem of Consciousness?
http://reducing-suffering.org/hard-problem-consciousness/
Consciousness Is a Process, Not a Moment
http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/
How to Interpret a Physical System as a Mind
http://reducing-suffering.org/interpret-physical-system-mind/
Dissolving Confusion about Consciousness
http://reducing-suffering.org/dissolving-confusion-about-consciousness/
Debate between Brian & Mike on consciousness:
https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D
Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:
My meta-framework for consciousness, including the Symmetry Theory of Valence:
http://opentheory.net/PrincipiaQualia.pdf
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:
http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:
http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/
My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/

The Most Important Philosophical Question

Albert Camus famously claimed that the most important philosophical question in existence was whether to commit suicide. I would disagree.

For one, if Open Individualism is true (i.e. that deep down we are all one and the same consciousness) then ending one’s life will not accomplish much. The vast majority of “who you are” will remain intact, and if there are further problems to be solved, and questions to be answered, doing this will simply delay your own progress. So at least from a certain point of view one could argue that the most important question is, instead, the question of personal identity. I.e. Are you, deep down, an individual being who starts existing when you are born and stops existing when you die (Closed Individualism), something that exists only for a single time-slice (Empty Individualism), or maybe something that is one and the same with the rest of the universe (Open Individualism)?

I think that is a very important question. But probably not the most important one. Instead, I’d posit that the most important question is: “What is good, and is there a ground truth about it?”

In the case that we are all one consciousness maybe what’s truly good is whatever one actually truly values from a first-person point of view (being mindful, of course, of the deceptive potential that comes from the Tyranny of the Intentional Object). And in so far as this has been asked, I think that there are two remaining possibilities: Does ultimate value come down to the pleasure-pain axis, or does it come down to spiritual wisdom?

Thus, in this day and age, I’d argue that the most important philosophical (and hence most important, period) question is: “Is happiness a spiritual trick, or is spirituality a happiness trick?”

What would it mean for happiness to be a spiritual trick? Think, for example, of the possibility that the reason why we exist is because we are all God, and God would be awfully bored if It knew that It was all that ever existed. In such a case, maybe bliss and happiness come down to something akin to “Does this particular set of life experiences make God feel less lonely?” Alternatively, maybe God is “divinely self-sufficient”, as some mystics claim, and all of creation is “merely a plus on top of God”. In this case one could think that God is the ultimate source of all that is good, and thus bliss may be synonymous with “being closer to God”. In turn, as mystics have claimed over the ages, the whole point of life is to “get closer to God”.

Spirituality, though, goes beyond God: Within (atheistic) Buddhism the view that “bliss is a spiritual trick” might take another form: Bliss is either “dirty and a sign of ignorance” (as in the case of karma-generating pleasure) or it is “the results of virtuous merit conducive to true unconditioned enlightenment“. Thus, the whole point of life would be to become free from ignorance and reap the benefits of knowing the ultimate truth.

And what would it mean for spirituality to be a happiness trick? In this case one could imagine that our valence (i.e. our pleasure-pain axis) is a sort of qualia variety that evolution recruited in order to infuse the phenomenal representation of situations that predict either higher or lower chances of making copies of oneself (or spreading one’s genes, in the more general case of “inclusive fitness”). If this is so, it might be tempting to think that bliss is, ultimately, not something that “truly matters”. But this would be to think that bliss is “nothing other than the function that bliss plays in animal behavior”, which couldn’t be further from the truth. After all, the same behavior could be enacted by many methods. Instead, the raw phenomenal character of bliss reveals that “something matters in this universe”. Only people who are anhedonic (or are depressed) will miss the fact that “bliss matters”. This is self-evident and self-intimating to anyone currently experiencing ecstatic rapture. In light of these experiences we can conclude that if anything at all does matter, it has to do with the qualia varieties involved in the experiences that feel like the world has meaning. The pleasure-pain axis makes our existence significant.

Now, why do I think this is the most important question? IF we discover that happiness is a spiritual trick and that God is its source then we really ought to follow “the spiritual path” and figure out with science “what is it that God truly wants”. And under an atheistic brand of spirituality, what we ought to figure out is the laws of valence-charged spiritual energy. For example, if reincarnation and karma are involved in the expected amount of future bliss and suffering, so be it. Let’s all become Bodhisattvas and help as many sentient beings as possible throughout the eons to come.

On the other hand, IF we discover (and can prove with a good empirical argument) that spirituality is just the result of changes in valence/happiness, then settling on this with high certainty would change the world. For starters, any compassionate (and at least mildly rational) Buddhist would then come along and help us out in the pursuit of creating a pan-species welfare state free of suffering with the use of biotechnology. I.e., the 500-odd million Buddhists worldwide would be key allies for the Hedonistic Imperative (a movement that aims to eliminate suffering with biotechnology).

Recall the Dalai Lama’s quote: “If it was possible to become free of negative emotions by a riskless implementation of an electrode – without impairing intelligence and the critical mind – I would be the first patient.” [Dalai Lama (Society for Neuroscience Congress, Nov. 2005)].

If Buddhist doctrine concerning the very nature of suffering and its causes is wrong from a scientific point of view and we can prove it with an empirically verified physicalist paradigm, then the very Buddhist ethic of “focusing on minimizing suffering” ought to compel Buddhists throughout the world to join us in the battle against suffering by any means necessary. And most likely, given the physicalist premise, this would take the form of creating a technology that puts us all in a perpetual pro-social clear-headed non-addictive MDMA-like state of consciousness (or, in a more sophisticated vein, a well-balanced version of rational wire-heading).

Political Peacocks

Extract from Geoffrey Miller’s essay “Political peacocks”

The hypothesis

Humans are ideological animals. We show strong motivations and incredible capacities to learn, create, recombine, and disseminate ideas. Despite the evidence that these idea-processing systems are complex biological adaptations that must have evolved through Darwinian selection, even the most ardent modern Darwinians such as Stephen Jay Gould, Richard Dawkins, and Dan Dennett tend to treat culture as an evolutionary arena separate from biology. One reason for this failure of nerve is that it is so difficult to think of any form of natural selection that would favor such extreme, costly, and obsessive ideological behavior. Until the last 40,000 years of human evolution, the pace of technological and social change was so slow that it’s hard to believe there was much of a survival payoff to becoming such an ideological animal. My hypothesis, developed in a long Ph.D. dissertation, several recent papers, and a forthcoming book, is that the payoffs to ideological behavior were largely reproductive. The heritable mental capacities that underpin human language, culture, music, art, and myth-making evolved through sexual selection operating on both men and women, through mutual mate choice. Whatever technological benefits those capacities happen to have produced in recent centuries are unanticipated side-effects of adaptations originally designed for courtship.

[…]

The predictions and implications

The vast majority of people in modern societies have almost no political power, yet have strong political convictions that they broadcast insistently, frequently, and loudly when social conditions are right. This behavior is puzzling to economists, who see clear time and energy costs to ideological behavior, but little political benefit to the individual. My point is that the individual benefits of expressing political ideology are usually not political at all, but social and sexual. As such, political ideology is under strong social and sexual constraints that make little sense to political theorists and policy experts. This simple idea may solve a number of old puzzles in political psychology. Why do hundreds of questionnaires show that men are more conservative, more authoritarian, more rights-oriented, and less empathy-oriented than women? Why do people become more conservative as they move from young adulthood to middle age? Why do more men than women run for political office? Why are most ideological revolutions initiated by young single men?

None of these phenomena make sense if political ideology is a rational reflection of political self-interest. In political, economic, and psychological terms, everyone has equally strong self-interests, so everyone should produce equal amounts of ideological behavior, if that behavior functions to advance political self-interest. However, we know from sexual selection theory that not everyone has equally strong reproductive interests. Males have much more to gain from each act of intercourse than females, because, by definition, they invest less in each gamete. Young males should be especially risk-seeking in their reproductive behavior, because they have the most to win and the least to lose from risky courtship behavior (such as becoming a political revolutionary). These predictions are obvious to any sexual selection theorist. Less obvious are the ways in which political ideology is used to advertise different aspects of one’s personality across the lifespan.

In unpublished studies I ran at Stanford University with Felicia Pratto, we found that university students tend to treat each others’ political orientations as proxies for personality traits. Conservatism is simply read off as indicating an ambitious, self-interested personality who will excel at protecting and provisioning his or her mate. Liberalism is read as indicating a caring, empathetic personality who will excel at child care and relationship-building. Given the well-documented, cross-culturally universal sex difference in human mate choice criteria, with men favoring younger, fertile women, and women favoring older, higher-status, richer men, the expression of more liberal ideologies by women and more conservative ideologies by men is not surprising. Men use political conservatism to (unconsciously) advertise their likely social and economic dominance; women use political liberalism to advertise their nurturing abilities. The shift from liberal youth to conservative middle age reflects a mating-relevant increase in social dominance and earnings power, not just a rational shift in one’s self-interest.

More subtly, because mating is a social game in which the attractiveness of a behavior depends on how many other people are already producing that behavior, political ideology evolves under the unstable dynamics of game theory, not as a process of simple optimization given a set of self-interests. This explains why an entire student body at an American university can suddenly act as if they care deeply about the political fate of a country that they virtually ignored the year before. The courtship arena simply shifted, capriciously, from one political issue to another, but once a sufficient number of students decided that attitudes towards apartheid were the acid test for whether one’s heart was in the right place, it became impossible for anyone else to be apathetic about apartheid. This is called frequency-dependent selection in biology, and it is a hallmark of sexual selection processes.
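The tipping dynamic described above can be made concrete with a minimal toy model (my own illustration, not from Miller’s essay): in replicator-style dynamics where the payoff of displaying an attitude is simply proportional to how common it already is, any slight majority snowballs into near-universal adoption, and any slight minority collapses into apathy. All function names here are hypothetical.

```python
def step(p):
    """One generation of replicator dynamics for the fraction p of people
    displaying concern about issue A. The fitness of displaying A equals
    its current frequency p (frequency-dependent payoff); the fitness of
    the alternative equals 1 - p."""
    fit_a = p          # attractiveness of displaying A grows with its popularity
    fit_b = 1.0 - p    # attractiveness of ignoring A shrinks as A spreads
    return p * fit_a / (p * fit_a + (1.0 - p) * fit_b)

def run(p0, generations=50):
    """Iterate the dynamics from an initial displaying fraction p0."""
    p = p0
    for _ in range(generations):
        p = step(p)
    return p

# A slight initial majority tips the whole population into caring;
# a slight initial minority tips it into apathy.
print(run(0.55))  # converges near 1.0
print(run(0.45))  # converges near 0.0
```

The exact 50/50 point is an unstable equilibrium, which is why the model’s outcome is capricious in just the way the essay describes: which issue becomes the acid test depends on tiny initial differences, not on the issue’s intrinsic merits.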

What can policy analysts do, if most people treat political ideas as courtship displays that reveal the proponent’s personality traits, rather than as rational suggestions for improving the world? The pragmatic, not to say cynical, solution is to work with the evolved grain of the human mind by recognizing that people respond to policy ideas first as big-brained, idea-infested, hyper-sexual primates, and only secondly as concerned citizens in a modern polity. This view will not surprise political pollsters, spin doctors, and speech writers, who make their daily living by exploiting our lust for ideology, but it may surprise social scientists who take a more rationalistic view of human nature. Fortunately, sexual selection was not the only force to shape our minds. Other forms of social selection such as kin selection, reciprocal altruism, and even group selection seem to have favoured some instincts for political rationality and consensual egalitarianism. Without the sexual selection, we would never have become such colourful ideological animals. But without the other forms of social selection, we would have little hope of bringing our sexily protean ideologies into congruence with reality.