Thoughts on the ‘Is-Ought Problem’ from a Qualia Realist Point of View

tl;dr If we construct a theory of meaning grounded in qualia and felt-sense, it is possible to congruently arrive at “should” statements on the basis of reason and “is” claims. Meaning grounded in qualia allows us to bring the pleasure-pain axis and its phenomenal character onto the same plane of discussion as factual and structural observations.

Introduction

The Is-Ought problem (also called “Hume’s guillotine”) is a classic philosophical conundrum. On the one hand, people feel that our ethical obligations (at least the uncontroversial ones, like “do not torture anyone for no reason”) are in some important sense facts about reality; on the other hand, rigorously deriving such “moral facts” from facts about the universe appears to be a category error. Is there any physical fact that truly compels us to act in one way or another?

A friend recently asked about my thoughts on this question and I took the time to express them to the best of my knowledge.

Takeaways

I provide seven points of discussion that together can be used to make the case that “ought” judgements are often, though not always, on the same ontological footing as “is” claims. Namely, they are references to the structure and quality of experience, whose ultimate nature is self-intimating (i.e. it reveals itself) and hence inaccessible to those who lack the physiological apparatus to instantiate it. In turn, we could say that within communities of beings who share the same self-intimating qualities of experience, the is/ought divide may not be completely unbridgeable.


Summaries of Question and Response

Summary of the question:

How does a “should” emerge at all? How can reason and/or principles and/or logic compel us to follow some moral code?

Summary of the response:

  1. If “ought” statements are to be part of our worldview, then they must refer to decisions about experiences: what kinds of experiences are better/worse, what experiences should or should not exist, etc.
  2. A shared sense of personal identity (e.g. Open Individualism – which posits that “we are all one consciousness”) allows us to make parallels between the quality of our experience and the experience of others. Hence if one grounds “oughts” on the self-intimating quality of one’s suffering, then we can also extrapolate that such “oughts” must exist in the experience of other sentient beings and that they are no less real “over there” simply because a different brain is generating them (general relativity shows that every “here and now” is equally real).
  3. Reduction cuts both ways: if the “fire in the equations of physics” can feel a certain way (e.g. bliss/pain) then objective causal descriptions of reality (about e.g. brain states) are implicitly referring to precisely that which has an “ought” quality. Thus physics may be inextricably connected with moral “oughts”.
  4. If one loses sight of the fact that one’s experience is the ultimate referent for meaning, it is possible to end up with nihilistic accounts of meaning (e.g. Quine’s indeterminacy of translation and Dennett’s inclusion of qualia within that framework). But if one grounds meaning in qualia, then suddenly both causality and value are on the same ontological footing (cf. Valence Realism).
  5. To see clearly the nature of value it is best to examine it at its extremes (such as MDMA bliss vs. the pain of kidney stones). Having such experiences illuminates the “ought” aspect of consciousness, in contrast to the typical quasi-anhedonic “normal everyday states of consciousness” that most people (and philosophers!) tend to reason from. It would be interesting to see philosophers discuss e.g. the Is-Ought problem while on MDMA.
  6. Claims by long-term meditators that “pleasure and pain, value and disvalue, good and bad, etc.” are an illusion, based on the experience of “dissolving value” in meditative states, are no more valid than claims by someone doped on morphine that pain is an illusion. In brief: such claims are made in a state of consciousness that has lost touch with the actual quality of experience that gives (dis)value to consciousness.
  7. Admittedly the idea that one state of consciousness can even refer to (let alone make value judgements about) other states of consciousness is very problematic. In what sense does “reference” even make sense? Every moment of experience only has access to its own content. We posit that this problem is not ultimately unsolvable, and that human concepts are currently mere prototypes of a much better future set of varieties of consciousness optimized for truth-finding. As a thought experiment to illustrate this possible future, consider a full-spectrum superintelligence capable of instantiating arbitrary modes of experience and impartially comparing them side by side in order to build a total order of consciousness.

Full Question and Response

Question:

I realized I don’t share some fundamental assumptions that seemed common amongst the people here [referring to the Qualia Research Institute and friends].

The most basic way I know how to phrase it, is the notion that there’s some appeal to reason and/or principles and/or logic that compels us to follow some type of moral code.

A (possibly straw-man) instance is the notion I associate with effective altruism, namely, that one should choose a career based on its calculable contribution to human welfare. The assumption is that human welfare is what we “should” care about. Why should we? What’s compelling about trying to reconfigure ourselves from whatever we value at the moment to replacing that thing with human welfare (or anything else)? What makes us think we can even truly succeed in reconfiguring ourselves like this? The obvious pitfall seems to be we create some image of “goodness” that we try to live up to without ever being honest with ourselves and owning our authentic desires. IMO this issue is rampant in mainstream Christianity.

More generally, I don’t understand how a “should” emerges within moral philosophy at all. I understand how, starting with a want (say, happiness) and noting a general tendency (such as “I become happy when I help others”), one could deduce that helping others often is likely to result in a happy life. I might even say “I should help others” to myself, knowing it’s a strategy to get what I want. That’s not the type of “should” I’m talking about. What I’m talking about is “should” at the most basic level of one’s value structure. I don’t understand how any amount of reasoning could tell us what our most basic values and desires “should” be.

I would like to read something rigorous on this issue. I appreciate any references, as well as any elucidating replies.

Response:

This is a very important topic. I think it is great that you raise this question, as it stands at the core of many debates and arguments about ethics and morality. I think that one can indeed make a really strong case for the view that “ought” is simply never logically implied by any accurate and objective description of the world (the famous is/ought Humean guillotine). I understand that an objective assessment of all that is will usually be cast as a network of causal and structural relationships. By starting out with a network of causal and structural relationships and using logical inferences to arrive at further high-level facts, one is ultimately bound to arrive at conclusions that themselves are just structural and causal relationships. So where does the “ought” fit in here? Is it really just a manner of speaking? A linguistic spandrel that emerges from evolutionary history? It could really seem like it, and I admit that I do not have a silver bullet argument against this view.

However, I do think that eventually we will arrive at a post-Galilean understanding of consciousness, and that this understanding will itself allow us to point out exactly where, if at all, ethical imperatives are located and how they emerge. For now all I have is a series of observations that I hope can help you develop an intuition for how we are thinking about it, and why our take is original (and not simply a rehashing of previous arguments or an appeal to nature/intuition/guilt).

So without further ado I would like to lay out the following points on the table:

  1. I am of the mind that if any kind of “ought” is present in reality, it will involve decision-making about the quality of consciousness of subjects of experience. I do not think that it makes sense to talk about an ethical imperative that has anything to do with non-experiential properties of the universe, precisely because there would be no one affected by it. If there is an argument for caring about things that have no impact on any state of consciousness, I have yet to encounter it. So I will assume that the question refers to whether certain states of consciousness ought to or ought not to exist (and how to make trade-offs between them).
  2. I also think that personal identity is key for this discussion, but why this is the case will make sense in a moment. The short answer is that conscious value is self-intimating/self-revealing, and in order to pass judgement on something that you yourself (as a narrative being) will not get to experience, you need some confidence (or reasonable cause) to believe that the same self-intimating quality of experience is present in other narrative orbits that will not interact with you. For the same reasons as in (1) above, it makes no sense to care about philosophical zombies (no matter how much they scream at you), and the same goes for “conscious value p-zombies” (who maybe experience color qualia but do not experience hedonic tone, i.e. they can’t suffer).
  3. A very important concept that comes up again and again in our research is the notion that “reduction cuts both ways”. We take dual-aspect monism seriously, and in this view we would consider the mathematical description of an experience and its qualia two sides of the same coin. Now, many people come here and say “the moment you reduce an experience of bliss to a mathematical equation you have removed any fuzzy morality from it and arrived at a purely objective and factual account which does not support an ‘ought’ ontology”. But this mental move requires you to take the mathematical account as a superior ontology to that of the self-intimating quality of experience. In our view, these are two sides of the same coin. If mystical experiences are just a bunch of chemicals, then a bunch of chemicals can also be a mystical experience. To reiterate: reduction cuts both ways, and this happens with the value of experience to the same extent as it happens with the qualia of e.g. red or cinnamon.
  4. Mike Johnson tends to bring up Wittgenstein and Quine in discussions of the Is-Ought problem because they are famous for ‘reducing language and meaning’ to games and networks of relationships. But here you should realize that you can apply the concept developed in (3) above just as well to this matter. In our view, a view of language that has “words and objects” at its foundation is not a complete ontology, nor is one that merely introduces language games to dissolve the mystery of meaning. What’s missing here is “felt sense” – the raw way in which concepts feel and operate on each other whether or not they are verbalized. It is my view that phenomenal binding becomes critical here, because a felt sense that corresponds to a word, concept, referent, etc. in itself encapsulates a large amount of information simultaneously, and contains many invariants across a set of possible mental transformations that define what it is and what it is not. Moreover, felt senses are computationally powerful (rather than merely epiphenomenal). Consider Daniel Tammet’s mathematical feats, achieved by experiencing numbers in complex synesthetic ways that interact with each other in ways that are isomorphic to multiplication, factorization, etc. What is more, he does this at competitive speeds. Language, in a sense, could be thought of as the surface of felt sense. Daniel Dennett famously argued that you can “Quine qualia” (meaning that you can explain them away with a groundless network of relationships and referents). We, on the opposite extreme, would bite the bullet of meaning and say that meaning itself is grounded in felt-sense and qualia. Thus colors, aromas, emotions, and thoughts, rather than being ultimately semantically groundless as Dennett would have it, turn out to be the very foundation of meaning.
  5. In light of the above, let’s consider some experiences that embody the strongest degree of the felt sense of “ought to be” and “ought not to be” that we know of. On the negative side, we have things like cluster headaches and kidney stones. On the positive side, we have things like Samadhi, MDMA, and 5-MeO-DMT states of consciousness. I am personally more confident that the “ought not to be” aspect of experience is real than that the “ought to be” aspect is, which is why I have a tendency (though no strong commitment) towards negative utilitarianism. When you touch a hot stove you get an involuntary reaction and the associated valence qualia of “reality needs you to recoil from this”, and in such cases one has degrees of freedom into which to back off. But when experiencing cluster headaches or kidney stones, this sensation, that self-intimating felt-sense of “this ought not to be”, is omnidirectional. The experience is one in which every direction feels negative, and in turn, at its extremes, one feels spiritually violated (“a major ethical emergency” is how a sufferer of cluster headaches recently described it to me). This brings me to…
  6. The apparent illusory nature of value in light of meditative deconstruction of felt-senses. As you put it elsewhere: “Introspectively – Meditators with deep experience typically report all concepts are delusion. This is realized in a very direct experiential way.” Here I am ambivalent, though my default response is to make sense of the meditation-induced feeling that “value is illusory” as itself an operation on one’s conscious topology that diminishes or unplugs the value quality of experience. Meditation masters will say things like “if you observe the pain very carefully, if you slice it into 30 tiny fragments per second, you will realize that the suffering you experience from it is an illusory construction”. And this kind of language is, IMO, liable to give off the impression that the pain was illusory to begin with. But here I disagree. We don’t say that people who take a strong opioid to reduce acute pain are “gaining insight into the fundamental nature of pain” and that that is “why they stop experiencing it”. Rather, we understand that the strong opioid changes the neurological conditions in such a way that the quality of the pain itself is modified, which results in a duller, “asymbolic”, non-propagating, well-confined discomfort. In other words, strong opioids reduce the value-quality of pain by locally changing the nature of pain rather than by bringing about a realization of its ultimate nature. The same goes for meditation. The starkest difference, I think, is that opioids prevent the spatial propagation of pain “symmetry breaking structures” across one’s experience and thus “confine pain to a small spatial location”, whereas meditation does something different that is better described as confining the pain to a small temporal region. This is hard to explain in full, and it will require us to fully formalize how the subjective arrow of time is constructed and how pain qualia can make copies across it. [By noting the pain very quickly one is, I believe, preventing it from building up and then producing “secondary pain”, which emerges from the cymatic resonance of the various lingering echoes of pain across one’s entire “pseudo-time arrow of experience”.] Sorry if this sounds like word salad; I am happy to unpack these concepts if needed, while also admitting that we are in the early stages of theoretical and empirical development.
  7. Finally, I will concede that the common-sense view of “reference” is very deluded on many levels. The very notion that we can refer to an experience with another experience, that we can encode the properties of a different moment of experience in one’s current moment of experience, that we can talk about the “real world” or its “objective ethical values” or “moral duty”, is very far from sensical in the final analysis. Reference is very tricky, and I think that a full understanding of consciousness will do severe violence to our common sense in this area. That, however, is different from the self-disclosing properties of experience such as red qualia and pain qualia. You can do away with all of common-sense reference while retaining a grounded understanding that “the constituents of the world are qualia values and their local binding relationships”. In turn, I do think that we can aim to do a decently good job of rebuilding from the ground up a good approximation of our common-sense understanding of the world using “meaning grounded in qualia”, and once we do that we will be on a solid foundation (as opposed to the admittedly very messy, quasi-delusional character of thoughts as they exist today). Needless to say, this may also require us to change our state of consciousness. “Someday we will have thoughts like sunsets” – David Pearce.


Burning Man 2.0: The Eigen-Schelling Religion, Entrainment & Metronomes, and the Eternal Battle Between Consciousness and Replicators

Because our consensus reality programs us in certain destructive directions, we must experience other realities in order to know we have choices.

Anyone who limits her vision to memories of yesterday is already dead.

Lillie Langtry

Last year I wrote a 13,000-word essay about my experience at Burning Man. This year I will also share some thoughts and insights concerning my experience, while being brief and limiting myself to seven thousand words. I wrote this piece to stand alone, so you do not need to have read the previous essay in order to make sense of the present text.


Camp Soft Landing

I have been wanting to attend Burning Man for several years, but last year was the first time I had both the time and resources to do so. Unfortunately I was not able to get a ticket in the main sale, so I thought I would have to wait another year to have the experience. Out of the blue, however, I received an email from someone from Camp Soft Landing asking me if I would be interested in giving a talk at Burning Man in their Palenque Norte speaker series. My immediate response was “I would love to! But I don’t have a ticket and I don’t have a camp.” The message I received in return was “Great! Well, we have extra tickets, and you can stay at our camp.” So just like that I suddenly had the opportunity to not only attend, but also be at a wonderful camp and give a talk about consciousness research.

Full Circle Teahouse

The camp I became a part of turned out to be an extremely good fit for me, both as a researcher and as a person. Camp Soft Landing is one of the largest camps at Burning Man, featuring a total of 150 participants every year. Its two main contributions to the playa are the Full Circle Teahouse and Palenque Norte. The Full Circle Teahouse is a place in which we serve adaptogenic herbal tea blends and Pu’er tea in a peaceful setting that emphasizes presence, empathy, and listening. It is also full of pillows and cozy blankets, and serves as a place for people who are overwhelmed to calm down or crash after a hectic night. (During training we were advised to expect that some people “may not know where they are or how they got here when they wake up in the early morning” and to “help them get oriented and offer them tea”.) Here are a few telling words by the Teahouse founder Annie Oak:

The real secret sauce to our camp’s collective survival has been our focus on the well being of everyone who steps inside Soft Landing. While the ancestral progenitor who occupied our location before us, Camp Above the Limit, ran a lively bar, we made a decision not to serve alcohol in our camp. I enjoy an occasional cocktail, but I believe that the conflating of the gift economy with free alcohol has compromised the public health and social cohesion of Black Rock City. We do not prohibit alcohol at Soft Landing, but we do not permit bars inside our camp. Instead, we run a tea bar at our Tea House for those seeking a place to rest, hydrate and receive compassionate care. We also give away hundreds of gallons of water to Tea House visitors. We don’t want to undermine their self-sufficiency, but we can proactively reduce the number of guests who become ill from dehydration. We keep our Tea House open until Monday after the Burn to help weary people stay alert on the perilous drive back home.

– Doing It Right: Theme Camp Management Insights from Camp Soft Landing

Palenque Norte

Palenque Norte is a speaker series founded by podcaster Lorenzo Hagerty in 2003 (cf. A Brief History of Palenque Norte). A friend described it as “TED for Psychedelic Research at Burning Man”, which is pretty accurate. Indeed, looking at a list of Palenque Norte speakers is like browsing a who’s who of the scientific and artistic psychedelic community: Johns Hopkins’ Roland Griffiths, MAPS’ Rick Doblin, Heffter’s George Greer, EFF’s John Gilmore, Ann & Sasha Shulgin (Q&A), DanceSafe’s Mitchell Gomez, Consciousness Hacking’s Mikey Siegel, Paul Daley, Bruce Damer, Will Siu, Emily Williams, Sebastian Job, Alex Grey, Android Jones, and many others. For reference, here was this year’s Palenque Norte schedule:

Thanks to the Full Circle Teahouse and Palenque Norte, the social and memetic composition of Camp Soft Landing is one that is characterized by a mixture of veteran scientists and community builders in their 50s and 60s, science and engineering nerds with advanced degrees in their late 20s and early 30s, and a dash of millennials and Gen-Z-ers in the rationalist/Effective Altruist communities.


Lorenzo Hagerty, Sasha Shulgin, and Bruce Damer (Burning Man, Palenque Norte c. 2007)

The people of Camp Soft Landing are near and dear to my heart given that they take consciousness seriously, they have a scientific focus, and they emit a strong intellectual vibe. As a budding qualia researcher myself, I feel completely at home there. As it turns out, this type of vibe is not at all out of place at Burning Man…

Burning Man Attendees

I would hazard a guess that Burning Man attendees are, on average, much more open to experience, conscientious, cognitively oriented, and psychologically robust than people in the general population. In particular, the combination of conscientiousness and openness to experience is golden. These are people who are not only able to think of crazy ideas, but who are also diligent enough to manifest them in the real world in concrete forms. This may account for the high production value and elaborate nature of the art, music, workshops, and collective activities. While the openness-to-experience aspect of Burning Man is fairly self-evident (it jumps out at you if you do a quick Google Images search), the conscientiousness aspect may be a little harder to believe. Here I will quote a friend to illustrate this component:

Burning Man is the annual meeting of the recreational logistics community. Or maybe it’s a job interview for CEO: how to deal with broken situations and unexpected constraints in a multi-agent setting, just to survive.

[…]

Things I learned / practiced in the last couple of weeks: truck driving, clever packing, impact driver, attaching bike trailer, pumping gas and filling generators, knots, adding hanging knobs to a whiteboard, tying things with wire, quickly moving tents on the last night, finding rides, using ratchet straps, opening & closing storage container, driving to Treasure Island.

GL

Indeed, this may be one of the key barriers to entry that define the culture of Burning Man, and it explains why the crazy ideas people have in a given year tend to come back in the form of art in the next year… rather than vanishing into thin air.

There are other key features of the people who attend, which can be seen by inspecting the Burning Man Census report. Here is a list of attributes, their base rate among Burners, and the base rate in the general population (for comparison): having an undergraduate degree (73.6% vs. 32%), holding a graduate degree (31% vs. 10%), being gay/lesbian (8.5% vs. 1.3%), bisexual (10% vs. 1.8%), bicurious (11% vs. ??), polyamorous (20% vs. 5%), mixed race (9% vs. 3%), female (40% vs. 50%), median income (62K vs. 30K), etc.

From a bird’s-eye view, one can describe Burners as more educated, more LGBT, more liberal or libertarian, more “spiritual but not religious”, and more mixed-race than the average person. There are many more interesting cultural and demographic attributes that define the population of Black Rock City, but I will leave it at that for the sake of brevity. That said, feel free to inspect the following Census graphs for further details:



Last year at Burning Man I developed a cluster of new concepts including “The Goldilocks Zone of Oneness” and “Hybrid Vigor in the context of post-Darwinian ethics.” I included my conversation with God and instructions for a guided oneness meditation. This year I continued to use the expanded awareness field of the Playa to further these and other concepts. In what follows I will describe some of the main ideas I experienced and then conclude with a summary of the talk I gave at Palenque Norte. If any of the following sections are too dense or uninteresting please feel free to skip them.

The Universal Eigen-Schelling Religion

On one of the nights, a group of friends and I went on a journey following an art car, stopping every now and then to dance and check out some art. At one point we drove through a large crowd of people, and by the time the art car was on the other side, a few people from the group were missing. The question then became “what do we do?” We hadn’t agreed on a strategy for dealing with this situation before we embarked on the trip. After a couple of minutes we all converged on a strategy: stay near the art car and drive around until we find the missing people. The whole situation had a “lost in space” quality. Finding individual people is very hard, since from a distance everyone is wearing roughly indistinguishable multi-colored blinking LEDs all over their body. But since art cars are large and more distinguishable at a distance, they become natural Schelling points for people to converge on. Schelling points are a natural coordination mechanism in the absence of direct communication channels.

We were thus able to regroup almost in our entirety (with only one person missing, whom we eventually had to give up on) by independently converging on the meta-heuristic of looking for the most natural Schelling point and finding the rest of the group there. For the rest of the night I kept thinking about how this meta-strategy may play out in the grand scheme of things.

If you follow Qualia Computing you may know that our default view on the nature of ethics is valence utilitarianism. People think they want specific things (e.g. ice-cream, a house, to be rich and famous, etc.) but in reality what they want is the high-valence response (i.e. happiness, bliss, and pleasure) that is triggered by such stimuli. When two people disagree on e.g. whether a certain food is tasty, they are not usually talking about the same experience. For one person, such food could induce high degrees of sensory euphoria, while for the other person, the food may leave them cold. But if they had introspective access to each other’s valence response, the disagreement would vanish (“Ah, I didn’t realize mayo produced such a good feeling for you. I was fixated on the aversive reaction I had to it.”). In other words, disagreements about the value of specific stimuli come down to lack of empathetic fidelity between people rather than a fundamental value mismatch. Deep down, we claim, we all like the same states of consciousness, and our disagreements come from the fact that their triggers vary between people. We call the fixation on the stimuli rather than the valence response the Tyranny of the Intentional Object.

In the grand scheme of things, we posit that advanced intelligences across the multiverse will generally converge on valence realism and valence utilitarianism. This is not an arbitrary value choice; it’s the natural outcome of looking for consistency among one’s disparate preferences and trying to investigate the true nature of conscious value. Insofar as curiosity is evolutionarily adaptive, any sufficiently general and sufficiently curious conscious mind eventually reaches the conclusion that value is a structural feature of conscious states and sheds the illusion of intentionality and closed identity. And while in the context of human history one could point at specific philosophers and scientists who have advanced our understanding of ethics (e.g. Plato, Bentham, Singer, Pearce), there may be a very abstract but universal way of describing the general tendency of curious conscious intelligences towards valence utilitarianism. It would go like this:

In a physicalist panpsychist paradigm, the vast majority of moments of experience do not occur within intelligent minds and leave no records of their phenomenal character for future minds to examine and inspect. A subset of moments of experience, though, do happen to take place within intelligent minds. We can call these conscious eigen-states because their introspective value can be retroactively investigated and compared against the present moment of experience, which has access to records of past experiences. Humans, insofar as they do not experience large amounts of amnesia, are able to experience a wide range of eigen-states throughout their lives. Thus, within a single human mind, many comparisons between the valence of various states of consciousness can be carried out (this is complicated and not always feasible given the state-dependence of memory). Either way, one could visualize how the information about the relative ranking of experiences is gathered across a Directed Acyclic Graph (DAG) of moments of experience that have partial introspective access to previous moments of experience. Furthermore, if the assumption of continuity of identity is made (i.e. that each moment of experience is witnessed by the same transcendental subject) then each evaluation between pairs of states of consciousness contributes a noisy datapoint to a universal ranking of all experiences and values.

After enough comparisons, a threshold number of evaluated experiences may be crossed, at which point a general theory of value can begin to be constructed. Thus a series of natural Schelling points for “what is universally valuable” become accessible to subsequent moments of experience. One of these focal points is the prevention of suffering throughout the entire multiverse. That is, to avoid experiences that do not like existing, independently of their location in space-time. Likewise, we would see another focal point that adds an imperative to realize experiences that value their own existence (“let the thought forms who love themselves reproduce and populate the multiverse”).
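For the statistically minded, the aggregation step this picture relies on (fusing many noisy pairwise comparisons into a single ranking) is a well-studied problem. Below is a minimal sketch using the Bradley-Terry model; the states, the comparisons, and the resulting scores are invented purely for illustration, and nothing here is QRI methodology:

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iters=200):
    """Fit latent 'valence scores' from noisy (winner, loser) pairs via the
    standard MM update for the Bradley-Terry model. Sorting by score yields
    the kind of total order over experiences discussed above."""
    wins = defaultdict(float)         # number of comparisons each item won
    pair_counts = defaultdict(float)  # how often each unordered pair was compared
    items = set()
    for w, l in comparisons:
        wins[w] += 1.0
        pair_counts[frozenset((w, l))] += 1.0
        items.update((w, l))
    score = {i: 1.0 for i in items}
    for _ in range(n_iters):
        new = {}
        for i in items:
            denom = sum(c / (score[i] + score[j])
                        for pair, c in pair_counts.items() if i in pair
                        for j in pair - {i})
            new[i] = wins[i] / denom if denom else score[i]
        total = sum(new.values())
        score = {i: s / total for i, s in new.items()}
    return score

# Invented introspective comparisons between four states of consciousness,
# including one noisy reversal (neutral judged above samadhi on one occasion):
data = [("mdma_bliss", "neutral"), ("neutral", "kidney_stone"),
        ("mdma_bliss", "kidney_stone"), ("samadhi", "neutral"),
        ("samadhi", "mdma_bliss"), ("neutral", "kidney_stone"),
        ("neutral", "samadhi")]
for state, s in sorted(bradley_terry(data).items(), key=lambda kv: -kv[1]):
    print(f"{state:>12}: {s:.3f}")
```

The point of the sketch is simply that a global ordering can be recovered from local, noisy, pairwise data, which is all that individual moments of experience ever contribute.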

I call this approach to ethics the Eigen-Schelling Religion. Any sapient mind in the multiverse with a general enough ability to reason about qualia and reflect on causality is capable of converging on it. In turn, we can see that many concepts at the core of world religions are built around universal Eigen-Schelling points. Thus, we can rest assured that both the Bodhisattva imperative to eliminate suffering and the Christ “world redeeming” sentiment are reflections of a fundamental converging process to which many other intelligent life-forms have access across the entire multiverse. What I like about this framework is that you don’t need to take anyone’s word for what constitutes wisdom in consciousness. It naturally exists as reflective focal points within the state-space of consciousness itself, in a way that transcends time and space.

Entrainment and Metronomes

In A Future for Neuroscience my friend and colleague Mike E. Johnson from the Qualia Research Institute explored how taking seriously the paradigm of Connectome-Specific Harmonic Waves (CSHW) leads us to reinterpret cognitive and personality traits in an entirely new light. In particular, here is what he has to say about emotional intelligence:

EQ (emotional intelligence quotient) isn’t very good as a formal psychological construct- it’s not particularly predictive, nor very robust when viewed from different perspectives. But there’s clearly something there– empirically, we see that some people are more ‘tuned in’ to the emotional & interpersonal realm, more skilled at feeling the energy of the room, more adept at making others feel comfortable, better at inspiring people to belief and action. It would be nice to have some sort of metric here.

I suggest breaking EQ into entrainment quotient (EnQ) and metronome quotient (MQ). In short, entrainment quotient indicates how easily you can reach entrainment with another person. And by “reach entrainment”, I mean how rapidly and deeply your connectome harmonic dynamics can fall into alignment with another’s. Metronome quotient, on the other hand, indicates how strongly you can create, maintain, and project an emotional frame. In other words, how robustly can you signal your internal connectome harmonic state, and how effectively can you cause others to be entrained to it. […] Most likely, these are reasonably positively correlated; in particular, I suspect having a high MQ requires a reasonably decent EnQ. And importantly, we can likely find good ways to evaluate these with CSHW.

This conceptual framework can be useful for making sense of the novel social dynamics that take place in Black Rock City. In particular, as illustrated by the Census responses, most participants are in a very open and emotionally receptive state at Burning Man:

One could say that by feeling safe, welcomed, and accepted at Burning Man, attendees adopt a very high Entrainment Quotient modus operandi. In tandem, we then see large art pieces, art cars, theme camps, and powerful sound systems blasting their unique distinctive emotional signals throughout the Playa. In a sense, the entire place looks like an ecosystem of brightly-lit, high-energy metronomes trying to attract the attention of a swarm of people who are in highly open and sensitive states and thus primed to be entrained. Since the competition for attention is ferocious, no single metronome can dominate or totally brainwash you. All it takes to get a bad signal out of your head is to walk 50 meters to another place, where the vibe will in all likelihood be completely different and overwrite the previous state.
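For readers who like toy models, the dynamic of a strong metronome pulling a crowd of oscillators into phase alignment is textbook Kuramoto-model territory. Here is a minimal sketch (my own illustration; the CSHW framework is far richer than this), where the order parameter r measures crowd coherence:

```python
import cmath
import math
import random

def crowd_coherence(n=50, steps=3000, dt=0.01, coupling=1.5):
    """Simulate n 'attendee' oscillators driven by one strong metronome.
    Returns the order parameter r in [0, 1]: 0 = incoherent crowd,
    1 = fully entrained crowd."""
    random.seed(0)
    freqs = [random.gauss(1.0, 0.1) for _ in range(n)]  # natural frequencies
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    metronome_phase, metronome_freq = 0.0, 1.0
    for _ in range(steps):
        metronome_phase += metronome_freq * dt
        # Each attendee drifts at their own frequency but is pulled toward
        # the metronome's phase (Kuramoto-style sine coupling).
        phases = [p + (f + coupling * math.sin(metronome_phase - p)) * dt
                  for p, f in zip(phases, freqs)]
    return abs(sum(cmath.exp(1j * p) for p in phases)) / n

print(f"with a strong metronome: r = {crowd_coherence():.2f}")
print(f"with no coupling at all: r = {crowd_coherence(coupling=0.0):.2f}")
```

Once the coupling strength exceeds the spread of the crowd’s natural frequencies, r jumps toward 1: one crude way of cashing out what it means for a sound system to “overwrite the previous state”.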

This dynamic reaches its ultimate climax the very night of the Burn, as (almost) everyone gathers around the Man in a maximally receptive state, while at the same time every art car and group vibe surrounds the crowd and blasts their unique signals as loud and as intensely as possible all at the same time. This leads to the reification of the collective Burning Man egregore, which manifests as the sum total of all signals and vibes in mass ecstasy.


Night of the Burn (source)

It is worth pointing out that not all of the metronomes in the Playa are created equal. Some art cars, for example, send highly specific and culturally-bound signals (e.g. country music, Simon & Garfunkel, Michael Jackson, etc.). While these metronomes will have their specific followings (i.e. you can always find a group of dedicated Pink Floyd fans), their ability to interface with the general Burner vibe is limited by their specificity and temporal irregularity. The more typical metronomic texture you will find scattered all around the Playa comes from art forms that make use of more general patternceutical Schelling points with a stronger and more general metronomic capacity. Of note is the high prevalence of house music and other 110 to 140 bpm (beats per minute) music that is able to entrain your brain from a distance and motivate you to move towards it, whether or not you are able to recognize the particular song. If you listen carefully to e.g. Palenque Norte recordings, you will notice the occasional art car driving by, and the music it is blasting will usually have its tempo within that range, with a strong, repeating, and easily recognizable beat structure. I suspect that this tendency is the natural emergent effect of the evolutionary selection pressures that art forms endure from one Burn to another, which benefit patterns that can captivate a lot of human attention in a competitive economy of recreational states of consciousness.


Android Jones’ Samskara at Camp Mystic 2017 (an example of the Open Individualist Schelling Vibe – i.e. the religion of the ego-dissolving LSD frequency of consciousness)

And then there are the extremely general metronome strategies that revolve around universal principles. The best example I found of this attention-capturing approach was the aesthetic of oneness, which IMO seemed to reach its highest expression at Camp Mystic:

Inspired by a sense of mystery & wonder, we perceive the consciousness of “We Are All One”. Mystics encourage the enigmatic spirit to explore a deeper connection not only on this planet and all that exists within, but the realm of the entire Universe.

– Who are the Mystics?

At their Wednesday night “White Dance Party” (where you are encouraged to dress in white) Camp Mystic was blasting the strongest vibes of Open Individualism I witnessed this year. I am of the mind that philosophy is the soul of poetry, and that massive party certainly had as its underlying philosophy the vibe of oneness and unity. This vibe is itself a Schelling point in the state-space of consciousness… the religion of the boundary-dissolving LSD frequency is not a random state, but a central hub in the super-highway of the mind. I am glad these focal points made prominent appearances at Burning Man.

Uncontrollable Feedback Loops

It is worth pointing out that at an open field as diverse as Burning Man we are likely to encounter positive feedback systems with both good and bad effects on human wellbeing. An example of a positive feedback loop with bad effects would be the incidents that transpired around the “Carkebab” art installation:

The sculpture consisted of a series of cars piled on top of each other held together by a central pole. The setup was clearly designed to be climbed given the visible handles above the cars leading to a view cart at the top. However, in practice it turned out to be considerably more dangerous and hard to climb than it seemed. Now you may anticipate the problem. If you are told that this art piece is climbable but dangerous, one can easily conjure a mental image of a future event in which someone falls and gets hurt. And as soon as that happens, access to the art installation will be restricted. Thus, one reasons that there is a limited amount of time left in which one will be able to climb the structure. Now imagine a lot of people having that train of thought. As more people realize that an accident is imminent, more people are motivated to climb it before that happens, thus creating an incentive to go as soon as possible, leading to crowding, which in turn increases the chance of an accident. The more people approach the installation, the more imminent the final point seems, and the more pressing it becomes to climb the structure before it becomes off-limits, and the more dangerous it becomes. Predictably, the imminent accident did take place. Thankfully it only involved a broken shoulder rather than something more severe. And yet, why did we let it get to that point? Perhaps in the future we should have methods to detect positive feedback loops like this and put the brakes on before it’s too late…
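As a toy illustration of this dynamic (all numbers invented; this shows the shape of the feedback, not data from the event), one can model each round of climbers as recruiting more climbers while the accident probability compounds with crowding:

```python
def carkebab_rush(steps=12, climbers=5.0, recruitment=0.5,
                  p_accident_per_climb=0.003):
    """Toy positive-feedback model: perceived imminence of closure recruits
    more climbers each round, which in turn makes an accident more likely."""
    p_safe_so_far = 1.0
    for t in range(steps):
        climbers *= 1.0 + recruitment  # feedback: crowding begets urgency
        p_accident_now = 1.0 - (1.0 - p_accident_per_climb) ** climbers
        p_safe_so_far *= 1.0 - p_accident_now
        print(f"round {t:2d}: ~{climbers:6.0f} climbers, "
              f"P(accident by now) = {1.0 - p_safe_so_far:.2f}")

carkebab_rush()
```

However the parameters are chosen, the qualitative outcome is the same: the number of climbers grows geometrically and the cumulative probability of an accident races toward 1.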

This leads to the topic of danger:

Counting Microlives

Can Burning Man be a place in which an abolitionist ethic can put down roots for long-term civilizational planning? Let’s briefly examine some of the potential acute, medium-term, and long-term costs of attending. Everyone has a limit, right? Some may want to think: “well, you only live once, let’s have fun”. But if you are one of the few who carry the wisdom, will, and love to move consciousness forward, this should not be how you think. What level of risk should an Effective Altruist be willing to accept in order to experience the benefits of Burning Man? I think that the critical question here is not “Is Burning Man dangerous?” but rather “How bad is it for you?”

Thankfully, actuaries, modern medicine, and economists have already developed a theoretical framework for putting a number on this question. Namely, the concept of micromorts (a one-in-a-million chance of dying) and its sister concept, the microlife (one millionth of a lifespan, roughly 30 minutes of adult life expectancy under the usual convention, lost or gained by performing some activity). My preference is for microlives because they translate more easily into time and are, IMO, more conceptually straightforward. So here is the question: how many microlives should we be willing to spend to attend Burning Man? 10 microlives? 100 microlives? 1,000 microlives? 10,000 microlives?
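For calibration, here is the conversion I have in mind, using the common convention (due to David Spiegelhalter) that one microlife is about 30 minutes of adult life expectancy; the candidate budgets are the ones from the question above:

```python
MINUTES_PER_MICROLIFE = 30  # ~one millionth of an adult lifespan (Spiegelhalter's convention)

for budget in (10, 100, 1_000, 10_000):
    hours = budget * MINUTES_PER_MICROLIFE / 60
    print(f"{budget:>6} microlives ≈ {hours:7.1f} hours ≈ {hours / 24:5.1f} days of life expectancy")
```

At the top of that range, 10,000 microlives amounts to roughly 200 days of statistical life, which is part of why I treat it as an implausible upper bound below.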

Based on the fact that there are many long-term burners still alive, I guesstimate that the upper bound cannot possibly be higher than 10,000, or we would know about it already: the percentage of people who get e.g. skin cancer or lung disease, or who die in other ways, would probably already be apparent in the community. Alternatively, it is possible that a reduced life expectancy as a result of attending e.g. 10+ Burns is an open secret among long-term burners… they see their friends die at an inexplicably high rate but are too afraid to talk about it honestly. After all, people tend to cling to their main sources of meaning (what we call “emotionally load-bearing activities”), so a large amount of denial can be expected in this domain.

Additionally, discussing Burning Man micromorts might be a particularly touchy and difficult subject for a number of attendees. The reason being that part of the psychological value that Burning Man provides is a felt sense of the confrontation with one’s fragility and mortality. Many older burners seem to have come to terms with their own mortality quite well already. Indeed, perhaps accepting death as part of life may be one of the very mechanisms of action for the reduction in neuroticism caused by intense experiences like psychedelics and Burning Man.

But that is not my jazz. I would personally not want to recommend an activity that costs a lot of microlives to other people in team consciousness. While I want to come to terms with death as much as your next Silicon Valley mystically-inclined nerd, I also recognize that death-acceptance is a somewhat selfish desire. Paradoxically, living a long, healthy, and productive life is one of the best ways for us to improve our chances of helping consciousness-at-large given our unwavering commitment to the eradication of all sentient suffering.

The main acute risks of Burning Man could be summarized as: dehydration, sleep deprivation, ODing (especially via accidental dosing, which is not uncommon, sadly), being run over by large vehicles (especially by art cars, trucks, and RVs), and falling from art or having art fall on you. These risks can be mitigated by the motto of “doing only one stupid thing at a time” (cf. How not to die at Burning Man). It’s ok to climb a medium-sized art piece if you are fully sober, or to take a psychedelic if you have sitters and don’t walk around art cars, etc. Most stories of accidents one hears about start along the lines of: “So, I was drunk, and high, and on mushrooms, and holding my camera, and I decided to climb on top of the thunderdome, and…”. Yes, of course that went badly. Doing stupid things on top of each other has multiplicative risk effects.

In the medium term, a pretty important risk is that of being busted by law enforcement. After all, the financial, psychological, and physiological effects of going to prison are rather severe on most people. On a similar note, a non-deadly but psychologically devastating danger of living in the desert for a week is an increased risk of kidney stones due to dehydration. The 10/10 pain you are likely to experience while passing a kidney stone may have far-reaching traumatic effects on one’s psyche and should not be underestimated (sufferers experience an increased risk of heart disease and, I would suspect, suicide).

But of all the risks, the ones that concern me the most are the long-term ones, given their otherwise silent nature. In particular, we have skin cancer due to UV exposure and lung/heart disease caused by high levels of PM2.5 particles. With respect to the skin component, it is worth observing that a large majority of Burning Man attendees are caucasian and thus at a significantly higher risk. Being a redhead, I take rather extreme precautions in this area. I apply SPF 50+ sunscreen every couple of hours, wear a wide-brim hat, wear arm sleeves [and gloves] for UV sun protection, wear sunglasses, stay in the shade as often as I can, etc. I recommend that other people follow these precautions as well.

And with regards to dust… here we have the largest error bars. Does Burning Man dust cause lung cancer? Does it impair lung function? Does it cause heart disease? As far as I can tell, nobody knows the answer to these questions. A lot of people seem to believe that the airborne particles are too large to pose a problem, but I highly doubt that is the case. The only source I’ve been able to find that tried to quantify dangerous particles at Burning Man comes from Camp Particle, which unfortunately does not seem to have published its results (and only provides preliminary data without the critical PM2.5 measure I was looking for). Here are two important thoughts in this area. First, let’s hope that the clay-like alkaline composition of Playa dust turns out to be harmless to the lungs. And second, like most natural phenomena, chances are that the concentration of dangerous particles in, e.g., one-minute buckets follows a power law. I would strongly expect that at least 80% of the dust one inhales comes from the 20% of the time in which it is most present. Moreover, during dust storms, and especially in white-outs, I would expect the concentration of dust in the air to be at least 1,000 times higher than the median concentration. If that’s true, breathing without protection during a white-out for as little as two minutes would be equivalent to breathing in “typical conditions” without protection for more than 24 hours. In other words, being strategic and diligent about wearing a heavy and cumbersome P100 mask may be far more effective than lazily taking a more convenient (but less effective) mask on and off throughout the day. Personally, I chose to always have on hand a 3M half facepiece with P100 filters, ready in case the dust suddenly became thicker. This did indeed save me from breathing dust during all the dust storms. The difference in the quality of air while wearing it was like night and day. I will also say that while I prefer my look when I have a beard, I chose to fully shave during the event in order to guarantee a good seal with the mask. In retrospect, the fashion sacrifice does seem to have been worth it, though at the time I certainly missed having a beard.
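To make the white-out claim explicit (again, these are assumed numbers rather than measurements, since no good PM2.5 data for the Playa seems to exist):

```python
# Dose comparison under the power-law assumption discussed above.
median_concentration = 1.0   # arbitrary units; real PM2.5 levels are unknown
whiteout_multiplier = 1_000  # assumed: white-outs are ~1,000x the median

whiteout_dose = 2 * whiteout_multiplier * median_concentration  # 2 minutes of white-out
typical_day_dose = 24 * 60 * median_concentration               # 24 unprotected hours

print(f"2 min of white-out ≈ {whiteout_dose / typical_day_dose:.1f}x "
      f"a full unprotected day at median dust levels")
```

This is the arithmetic behind the strategy of reaching for the serious mask the moment the dust thickens, rather than half-wearing a lighter one all day.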


The remaining question is: with a realistic amount of protection, what is an acceptable level of risk? I propose that you make up your mind before we find out scientifically how dangerous Burning Man actually is. In my case, I am willing to endure up to 100 negative microlives per day at Burning Man (for a total of ~800 microlives) as the absolute upper bound. Anything higher than that and the experience wouldn’t be worth it for me, and I would not recommend it to memetic allies. Thankfully, I suspect that the actual danger is lower than that, perhaps in the range of 40 negative microlives per day (mostly in the form of skin cancer and lung disease). But the problem remains that this estimate has very wide error bars. This needs to be addressed.

And if the danger does turn out to be unacceptable, then we can still look to recreate the benefits of Burning Man in a safer way: Your Legacy Could Be To Move Burning Man to a Place With A Fraction of Its Micromorts Cost.

Dangerous Bonding

In the ideal case Burning Man would be an event that triggers our brains to produce “danger signals” without there actually being much danger at all. This is because with our current brain implementation, experiencing perceived danger is helpful for bonding, trust building, and a sense of self-efficacy and survival ability.

And now on to my talk…

Andrés Gómez Emilsson – Consciousness vs. Replicators

The video above documents my talk, which includes an extended Q&A with the audience. Below is a quick summary of the main points I touched on throughout the talk:

  1. Intro to Qualia Computing
    1. I started out by asking the audience if they had read any Qualia Computing articles. About 30% of them raised a hand. I then asked them how they found out about my talk, and it seems that the majority of the attendees (50%+) found it through the “What Where When” booklet. Since the majority of the people didn’t know about Qualia Computing before the talk, I decided to provide a quick introduction to some of the main concepts:
      1. What is qualia? – The raw way in which consciousness feels. Like the blueness of blue. Did you ever wonder as a kid whether other people saw the same colors as you? Qualia is that ineffable quality of experience that we currently struggle to communicate.
      2. Personal Identity:
        1. Closed Individualism – you start existing when you are born, stop existing when you die.
        2. Empty Individualism – brains are “experience machines” and you really are just a “moment of experience” disconnected from every other “moment of experience” your brain has generated or will generate.
        3. Open Individualism – we are all the “light of consciousness”. Reality has only one numerically identical subject of experience who is everyone, but which takes all sorts of forms and shapes.
        4. For the purpose of this talk I assume that Open Individualism is true, which provides a strong reason to care about the wellbeing of all sentient beings, even from a “selfish” point of view.
      3. Valence – This is the pleasure-pain axis. We take a valence realist view which means that we assume that there is an objective matter of fact about how much an experience is in pain/suffering vs. experiencing happiness/pleasure. There are pure heavenly experiences, pure hellish experiences, mixed states (e.g. enjoying music you love on awful speakers while wanting to pee), and neutral states (e.g. white noise, mild apathy, etc.).
      4. Evolutionary advantages of consciousness as part of the information processing pipeline – I pointed out that we also assume that consciousness is a real and computationally relevant phenomenon. In particular, the reason why consciousness was recruited by natural selection to process information has to do with “phenomenal binding”. I did not go into much detail about it at the time, but if you are curious, I elaborated on this during the Q&A.
  2. Spirit of our research:
    1. Exploration + Knowledge/Synthesis. Many people either over-focus on exploration (especially people very high in openness to experience) or on synthesis (like conservatives who think “the good days are gone, let’s study history”). The spirit of our research combines both open-ended exploration and strong synthesis. We encourage people to both expand their evidential base and make serious time to synthesize and cross-examine their experiences.
    2. A lot of people treat consciousness research like people used to treat alchemy. That is, they have a psychological need to “keep things magical”. We don’t. We think that consciousness research is due to transition into a hard science and that many new possibilities will be unlocked after this transition, not unlike how chemistry is thousands of times more powerful than alchemy because it allows you to create synthesis pathways from scratch using chemistry principles.
  3. How People Think and Why Few Say Meaningful Things:
    1. What most people say and talk about is a function of the surrounding social status algorithm (i.e. what kind of things award social recognition) and deep-seated evolutionarily adaptive programs (such as survival, reproductive, and affective consistency programs).
    2. Nerds and people on the autism spectrum tend to circumvent this general mental block and are able to discuss things without being motivated solely by status or evolutionary programs, being driven instead by open-ended curiosity. We encourage our collaborators to take that approach to consciousness research.
  4. What the Economy is Based on:
    1. Right now there are three main goods that are exchanged in the global economy. These are:
      1. Survival – resources that help you survive, like food, shelter, safety, etc.
      2. Power – resources that allow you to acquire social and physical power and thus increase your chances of reproducing.
      3. Consciousness – information about the state-space of consciousness. Right now people are willing to spend their “surplus” resources on experiences even if they do not increase their reproductive success. A possible dystopian scenario is one in which people do not do this anymore – everyone spends all of their available time and energy pursuing jobs for the sake of maximizing their wealth and increasing their reproductive success. This leads us to…
  5. Pure Replicators – In Wireheading Done Right we introduced the concept of a Pure Replicator: “I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.” (e.g. crystals, viruses, programs, memes, genes)
    1. It is reasonable to expect that in the absence of evolutionary selection pressures that favor the wellbeing of sentient beings, in the long run everyone alive will be playing a Pure Replicator strategy.
  6. States vs. Stages vs. Theory of Morality
    1. Ken Wilber emphasizes that there is a key difference between states and stages. Whereas states of consciousness involve various degrees of oneness and interconnectedness (from normal everyday sober experiences all the way to unity consciousness and satori), how you interpret these states will ultimately depend on your own level of moral development and maturity. This is very true and important. But I propose a further axis:
    2. Levels of intellectual understanding of ethics. While stages of consciousness refer to the degree to which you are comfortable with ambiguity, can synthesize large amounts of seemingly contradictory experiences, and are able to be emotionally stable in the face of confusion, we think that there is another axis worth exploring that has more to do with one’s intellectual model of ethics.
    3. The 4 levels are:
      1. Good vs. evil – the most common view which personifies/essentializes evil (e.g. “the devil”)
      2. Balance between good and evil – the view that most people who take psychedelics and engage in eastern meditative practices tend to arrive at. People at this level tend to think that good implies evil, and that the best we can do is to reach a state of balance and equanimity. I argue that this is a rationalization to be able to deal with extremes of suffering; the belief itself is used as an anti-depressant, which shows the intrinsic contradictoriness and motivated reasoning behind adopting this ethical worldview. You believe in the balance between good and evil in general so that you, right now, can feel better about your life. You are still, implicitly, albeit in a low-key way, trying to regulate your mood like everyone else.
      3. Gradients of wisdom – this is the view that people like Sam Harris, Ken Wilber, John Lilly, David Chapman, Buddha, etc. seem to converge on. They don’t have a deontological “if-then” ethical programming like the people at the first level. Rather, they have general heuristics and meta-heuristics for navigating complex problems. They do not claim to know “the truth” or be able to identify exactly what makes a society “better for human flourishing” but they do accept that some environments and states of consciousness are more healthy and conducive to wisdom than others. The problem with this view is that it does not give you a principled way to resolve disagreements or a way forward for designing societies from first principles.
      4. Consciousness vs. pure replicators – this view is the culmination of intellectual ethical development (although one could still be very neurotic and unenlightened otherwise). It arises when one identifies the source of everything that is systematically bad as patterns that are good at making copies of themselves but that either don’t add conscious value or actively increase suffering. In this framework, it is possible for consciousness to win, which would happen if we create a full-spectrum super-sentient super-intelligent singleton that explores the entire state-space of consciousness and rationally decides what experiences to instantiate at a large scale based on the empirically revealed total order of consciousness.
  7. New Reproductive Strategies
    1. Given that we on team consciousness are in a race against Pure Replicator Hell scenarios, it is important to explore ways in which we could load the dice in favor of consciousness. One way to do so would be to increase the ways in which prosocial people are able to reproduce and pass on their pro-consciousness genes going forward. Here are a few interesting examples:
      1. Gay + Lesbian couple – we could help gay and lesbian couples with long time horizons have biological kids with the following scheme: gay couple A + B and lesbian couple X + Z combine their genes and have 4 kids: A/X, A/Z, B/X, B/Z. This would create the genetic and game-theoretical incentives for this new kind of family structure to work in the long term.
      2. Genetic spellchecking – one of the most promising ways of increasing sentient welfare is to apply genetic spellchecking to embryos. This means that we would be reducing the mutational load of one’s offspring without compromising one’s genetic payload (and thus selfish genes would agree to the procedure and lead to an evolutionarily stable strategy). You wouldn’t ship code to production without testing and debugging, you wouldn’t publish a book without someone proof-reading it first, so why do we push genetic code to production without any debugging? As David Pearce says, right now every child is a genetic experiment. It’s terrible that such a high percentage of them lead to health and mental problems.
      3. A reproductive scheme in which 50% of the genes come from an “intelligently vetted gene pool” and the other 50% come from the parents’ genes. This would be very unpopular at first, but after a generation or two we would see that all of the kids who are the result of this procedure are top of the class, win athletic competitions, start getting Nobel prizes and Fields medals, etc. So soon every parent will want to do this… and indeed from a selfish gene point of view there will be no option but to do so, as it will make the difference between passing on some copies vs. none.*
      4. Dispassionate evaluation of the merits and drawbacks of one’s genes in a collective of 100 or more people where one recombines the genetic makeup of the “collective children” in order to maximize both their wellbeing and the information gained. In order to do this analysis in a dispassionate way we might need to recruit 5-MeO-DMT-like states of consciousness that make you identify with consciousness rather than with your particular genes, and also MDMA-like states of mind in order to create a feeling of connection to source and universal love even if your own patterns lose out at some point… which they will after long enough, because eventually the entire gene pool would be replaced by a post-human genetic make-up.
  8. Consciousness vs. Replicators as a lens – I discussed how one can use the 4th stage of intellectual ethical development as a lens to analyze the value of different patterns and aesthetics. For example:
    1. Conservatives vs. Liberals (stick to your guns and avoid cancer vs. be adaptable but expose yourself to nasty dangers)
    2. Rap Music vs. Classical or Electronic music (social signaling vs. patternistic valence exploration)
  9. Hyperstition – Finally, I discussed the concept of hyperstition: “ideas that make themselves real”. I explored it in the first Burning Man article. The core idea is that states of consciousness can indeed transform the history of the cosmos. In particular, high-energy states of mind like those experienced under psychedelics allow for “bigger ideas” and thus increase the upper bound of “irreducible complexity” for one’s thoughts. An example of this is coming up with further alternative reproductive strategies, which I encouraged the audience to do in order to increase the chances that team consciousness wins in the long term…

The End.


Bonus content: things I overheard virgin burners say:

  • “Intelligent people build intelligent civilizations. I now get what a society made of brilliant people would look like.”
  • “Burning Man is a magical place. It seems like it is one of the only places on Earth where the Spirit World and the Physical World intersect and play with each other.”
  • “It is not every day that you engage in a deeply transformative conversation before breakfast.”

* Thanks to Alison Streete for this idea.

Qualia Computing at Burning Man 2018: “Consciousness vs Replicators” talk

I’m thrilled to announce that I will be going to Burning Man for the second time this year. I will give a talk about Consciousness vs. Pure Replicators. The talk will be at Palenque Norte’s consciousness-focused speaker series hosted by Camp Soft Landing.


The whole experience last year was very eye-opening, and as a result I wrote an (extremely) long essay about it. The essay introduces a wide range of entirely new concepts, including “The Goldilocks Zone of Oneness” and “Hybrid Vigor in the context of post-Darwinian ethics.” It also features a section about my conversation with God at Burning Man.

If you are attending Burning Man and would like to meet with me, I will be available for chatting and hanging out right after my talk (call it the Qualia Research Institute Office Hours at Burning Man).


Here are the details of the talk:

Andrés Gómez Emilsson – Consciousness vs Replicators

Date and Time: Wednesday, August 29th, 2018, 3 PM – 4:30 PM
Type: Class/Workshop
Located at: Camp Soft Landing (8:15 & C (Cylon). Mid-block on C, between 8 and 8:30.)

Description:

Patterns that are good at making copies of themselves are not necessarily good from an ethical point of view. We call Pure Replicators, in the context of brains and minds, those beings that use all of their resources for the purpose of replicating. In other words, beings that replicate without regard for their own psychological wellbeing (if they are conscious) or the wellbeing of others. Inasmuch as we believe that value resides in the quality of experience, perhaps to be “ethical” is to be stewards and advocates for the wellbeing of as many of the “moments of experience” that exist in reality as one can. We will talk about how an “economy of information about the state-space of consciousness” can be a helpful tool in preventing pure-replicator take-over. Lastly, we will announce the existence of a novel test of consciousness that can be used to identify non-sentient artifacts or robots passing for humans within the crowd.


The Banality of Evil

In response to the Quora question “I feel like a lot of evil actions in the world have supporters who justify them (like Nazis). Can you come up with some convincing ways in which some of the most evil actions in the world could be justified?”, David Pearce writes:


“Tout comprendre, c’est tout pardonner.” (“To understand all is to forgive all.”)
(Leo Tolstoy, War and Peace)

Despite everything, I believe that people are really good at heart.
(Anne Frank)

The risk of devising justifications of the worst forms of human behaviour is that there are people gullible enough to believe them. It’s not as though anti-Semitism died with the Third Reich. Even offering dispassionate causal explanation can sometimes be harmful. So devil’s advocacy is an intellectual exercise to be used sparingly.

That said, the historical record suggests that human societies don’t collectively set out to do evil. Rather, primitive human emotions get entangled with factually mistaken beliefs and ill-conceived metaphysics with ethically catastrophic consequences. Thus the Nazis seriously believed in the existence of an international Jewish conspiracy against the noble Aryan race. Hitler, so shrewd in many respects, credulously swallowed The Protocols of the Elders of Zion. And as his last testament disclosed, obliquely, Hitler believed that the gas chambers were a “more humane means” than the terrible fate befalling the German Volk. Many Nazis (Himmler, Höss, Stangl, and maybe even Eichmann) believed that they were acting from a sense of duty – a great burden stoically borne. And such lessons can be generalised across history. If you believed, like the Inquisition, that torturing heretics was the only way to save their souls from eternal damnation in Hell, would you have the moral courage to do likewise? If you believed that the world would be destroyed by the gods unless you practised mass human sacrifice, would you participate? [No, in my case, albeit for unorthodox reasons.]

In a secular context today, there exist upstanding citizens who would like future civilisation to run “ancestor simulations”. Ancestor simulations would create inconceivably more suffering than any crime perpetrated by the worst sadist or deluded ideologue in history – at least if the computational-functional theory of consciousness assumed by their proponents is correct. If I were to pitch a message to life-lovers aimed at justifying such a monstrous project, as you request, then I guess I’d spin some yarn about how marvellous it would be to recreate past wonders and see grandpa again.
And so forth.

What about the actions of individuals, as distinct from whole societies? Not all depraved human behaviour stems from false metaphysics or confused ideology. The grosser forms of human unpleasantness often stem just from our unreflectively acting out baser appetites (cf. Hamiltonian spite). Consider the neuroscience of perception. Sentient beings don’t collectively perceive a shared public world. Each of us runs an egocentric world-simulation populated by zombies (sic). We each inhabit warped virtual worlds centered on a different body-image, situated within a vast reality whose existence can be theoretically inferred. Or so science says. Most people are still perceptual naïve realists. They aren’t metaphysicians, or moral philosophers, or students of the neuroscience of perception. Understandably, most people trust the evidence of their own eyes and the wisdom of their innermost feelings, over abstract theory. What “feels right” is shaped by natural selection. And what “feels right” within one’s egocentric virtual world is often callous and sometimes atrocious. Natural selection is amoral. We are all slaves to the pleasure-pain axis, however heavy the layers of disguise. Thanks to evolution, our emotions are “encephalised” in grotesque ways. Even the most ghastly behaviour can be made to seem natural – like Darwinian life itself.

Are there some forms of human behaviour so appalling that I’d find it hard to play devil’s advocate in their mitigation – even as an intellectual exercise?

Well, perhaps consider, say, the most reviled hate-figures in our society – even more reviled than murderers or terrorists. Most sexually active paedophiles don’t set out to harm children: quite the opposite, harm is typically just the tragic by-product of a sexual orientation they didn’t choose. Posthumans may reckon that all Darwinian relationships are toxic. Of course, not all monstrous human behavior stems from wellsprings as deep as sexual orientation. Thus humans aren’t obligate carnivores. Most (though not all) contemporary meat eaters, if pressed, will acknowledge in the abstract that a pig is as sentient and sapient as a prelinguistic human toddler. And no contemporary meat eaters seriously believe that their victims have committed a crime (cf. Animal trial – Wikipedia). Yet if questioned why they cause such terrible suffering to the innocent, and why they pay for a hamburger rather than a veggieburger, a meat eater will come up with perhaps the lamest justification for human depravity ever invented:

“But I like the taste!”

Such is the banality of evil.

Person-moment affecting views

by Katja Grace (source)

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in various ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do where the sentence ‘If Alice didn’t exist, she wouldn’t have X’ seems clearly true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world for instance, it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, but still somewhat, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.



An interesting thing to point out here is that what Katja describes as the further-fact view is terminologically equivalent to what we here call Closed Individualism (cf. Ontological Qualia). This is the common-sense view that you start existing when you are born and stop existing when you die (which also has soul-based variants with possible pre-birth and post-death existence). This view is not very philosophically tenable because it presupposes that there is an enduring metaphysical ego distinct for every person. And yet, the vast majority of people still hold strongly to Closed Individualism. In some sense, in the article Katja tries to rescue the common-sense aspect of Closed Individualism in the context of ethics. That is, by trying to steel-man the common-sense notion that people (rather than moments of experience) are the relevant units for morality while also negating further-fact views, she provides reasons to keep using Closed Individualism as an intuition-pump in ethics (if only for pragmatic reasons). In general, I consider these kinds of discussions to be a very fruitful endeavor, as they approach ethics by touching upon the key parameters that matter fundamentally: identity, value, and counterfactuals.

As you may gather from pieces such as Wireheading Done Right and The Universal Plot, at Qualia Computing we tend to think the most coherent ethical system arises when we take as a premise that the relevant moral agents are “moments of experience”. Contra Person-affecting views, we don’t think it is meaningless to say that a given world is better than another one if not everyone in the first world is also in the second one. On the contrary – it really does not matter who lives in a given world. What matters is the raw subjective quality of the experiences in such worlds. If it is meaningless to ask “who is experiencing Alice’s experiences now?” once you know all the physical facts, then moral weight must be encoded in such physical facts alone. In turn, it may well turn out that the narrative aspect of an experience is irrelevant for determining its intrinsic value. People’s self-narratives may certainly have important instrumental uses, but at their core they don’t make it to the list of things that intrinsically matter (unlike, say, avoiding suffering).

A helpful philosophical move that we have found adds a lot of clarity here is to analyze the problem in terms of Open Individualism. That is, assume that we are all one consciousness and take it from there. If so, then the probability that you are a given person would be weighted by the amount of consciousness (or number of moments of experience, depending) that such person experiences throughout his or her life. You are everyone in this view, but you can only be each person one at a time from their own limited points of view. So there is a sensible way of weighting the importance of each person, and this is a function of the amount of time you spend being him or her (normalized by the amount of consciousness that person experiences, in case that is variable across individuals).
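As a minimal formal sketch of this weighting (assuming, hypothetically, that the total amount of consciousness Q_j that person j instantiates over a lifetime can be quantified), the weight of each person would simply be:

$$P(\text{you are person } j) \;=\; \frac{Q_j}{\sum_k Q_k}$$

This normalization directly expresses the idea above: you “are” each person in proportion to how much experience that person hosts.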

If consciousness emerges victorious in its war against pure replicators, then it would make sense that the main theory of identity people would hold by default would be Open Individualism. After all, it is only Open Individualism that aligns individual incentives and the total wellbeing of all moments of experience throughout the universe.

That said, in principle, it could turn out that Open Individualism is not needed to maximize conscious value – that while it may be useful instrumentally to align the existing living intelligences towards a common consciousness-centric goal (e.g. eliminating suffering, building a harmonic society, etc.), in the long run we may find that ontological qualia (the aspect of our experience that we use to represent the nature of reality, including our beliefs about personal identity) have no intrinsic value. Why bother experiencing heaven in the form of a mixture of 95% bliss and 5% ‘a sense of knowing that we are all one’, if you can instead just experience 100% pure bliss?

At the ethical limit, anything that is not perfectly blissful might end up being thought of as a distraction from the cosmic telos of universal wellbeing.

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)


“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

– Alan Watts, talking about Camus’ claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (aka. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together as a unit to hang together tight. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. In the meantime, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain *in its own terms* the reasons why other worldviews are popular as well. In what follows we will utilize these background assumptions to evaluate other worldviews.


The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people. That evil beings exist objectively, and that it is righteous to punish them and see them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck here. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, where there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumptions of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above) we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived as either sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure, all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

– “Balance Between Good and Evil”, TVTropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication; because they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws); or because they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and project it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, binds them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and lack of thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for such a reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, where they recognize that there are states of consciousness that are inherently better in some intrinsic subjective value sense than others.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics for fear of sounding unwise or insensitive. And when push comes to shove, they lack good arguments to decisively rule out why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures lead in the opposite direction, where negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and relative to the general population, they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is formalizing the situation and working out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts that suggest it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used in order to steer evolution in a positive direction. In particular, as of 2017:

  1. A very big proportion of the economy is dedicated to trading positive experiences for money, rather than just survival or power tools. Thus an economy of information about states of consciousness is still feasible.
  2. There is a large fraction of the population who is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and ultimately, economically useful (see hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested in (or at least curious about) the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence, but also, amazingly, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies where the highest value is the collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider for example the contrast between Burning Man and Las Vegas. Burning Man is a place that works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place that is filled to the top with pure replicators in the form of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that will make it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. Per the definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. And in the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they may require. Such utility will be a function of both the accuracy with which the system performs its task, as well as its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Figure: system specifications – a system together with its inputs, outputs, physical constraints, and efficiency metrics]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence. We have the inputs, the outputs, the constraints (such as temperature, materials, etc.) and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

[Figure: basic system – the input-output mapping implemented by system s]

Now, we introduce the notion of the “valence for the system given a particular input” (i.e. the valence for the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

U = U(e(o|i, s, c), v(s|i)) – the overall utility, where the valence term v(s|i) is rewarded/penalized only by altruistic intelligences and ignored by non-altruistic ones.
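As a concrete toy instance (the linear form and the altruism weight λ below are our illustrative assumptions, not something fixed by the framework), one could write:

$$U\big(e(o|i,s,c),\, v(s|i)\big) \;=\; e(o|i,s,c) + \lambda\, v(s|i), \qquad \lambda > 0 \text{ for altruists}, \quad \lambda = 0 \text{ for pure replicators.}$$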

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the utility of each input multiplied by its probability.

E[U(s)] = Σ_i P(i) · U(e(o|i, s, c), v(s|i))

(Note: a more complete formalization would also weight the importance of each input-output transformation, in addition to its frequency). Moving on, we can now define the overall expected utility for the system given the distribution of inputs it’s used for, its valence, its efficiency metrics, and its constraints as E[U(s|v, e, c, P(I))]:

chosen_system

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
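Putting the pieces together, here is a toy version of the whole selection step. The candidate systems, their per-input (efficiency, valence) pairs, and the input distribution P(I) are made-up values, chosen only to show how a pure replicator and an altruist can end up choosing different systems:

```python
# Per-input (efficiency, valence) pairs for two hypothetical systems.
candidates = {
    "system_A": {"i1": (0.9, -0.5), "i2": (0.8, -0.2)},  # efficient but suffering
    "system_B": {"i1": (0.7, +0.4), "i2": (0.6, +0.3)},  # less efficient, blissful
}
input_probs = {"i1": 0.5, "i2": 0.5}  # P(I)

def expected_utility(per_input, valence_weight=0.0):
    """E[U(s|v, e, c, P(I))]: sum over inputs of P(i) * U(e, v)."""
    total = 0.0
    for i, p in input_probs.items():
        e, v = per_input[i]
        total += p * (e + valence_weight * v)
    return total

for w in (0.0, 1.0):  # pure replicator vs. altruist
    best = max(candidates, key=lambda s: expected_utility(candidates[s], w))
    print(f"valence_weight={w}: chooses {best}")
# valence_weight=0.0: chooses system_A
# valence_weight=1.0: chooses system_B
```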

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on altruism. If altruism does not increase reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). As evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of those systems. In other words, evolution pushes intelligences to become pure replicators in the long run.
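The selection argument can be illustrated with a toy Wright-Fisher-style simulation. The dynamics and parameters below are assumptions for illustration, not a model from the article: if altruism carries any uncompensated efficiency cost, the altruistic fraction of the population decays toward zero.

```python
import random

def altruist_fraction(generations=200, pop=1000, altruism_cost=0.05, seed=0):
    """Fraction of altruists remaining after repeated rounds of
    fitness-proportional reproduction, when altruists pay a small
    (assumed) efficiency cost relative to pure replicators."""
    rng = random.Random(seed)
    altruists = pop // 2
    for _ in range(generations):
        fit_a = 1.0 - altruism_cost          # altruist fitness
        total = altruists * fit_a + (pop - altruists) * 1.0
        p_a = altruists * fit_a / total      # chance each offspring is an altruist
        altruists = sum(rng.random() < p_a for _ in range(pop))
    return altruists / pop

print(altruist_fraction())  # ~0.0: pure replicators take over
```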

Hence we should ask: How can altruism increase the chances of reproduction? One possibility would be for the environment to reward entities that are altruistic. Unfortunately, in the long run environments that reward altruistic entities may produce less efficient entities than environments that don’t. If there are two very similar environments, one which rewards altruism and one which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism is implementing the reward for altruism in the former. Thus, we suggest looking for environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to persist for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing that valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in sketching the rough outlines of the kinds of theories on offer. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How to formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned about pain or pleasure, and there is cause to believe that behaviorism itself might be anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are in fact hedonically charged, and why, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind gets more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses being made about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state might be. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than this working as a reductio ad absurdum of functionalism, many of its proponents have instead turned around and concluded that consciousness itself is not a natural kind. This does represent an important challenge for defining the valence of a system, and it makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.
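Under functionalism, then, the valence computation factors through A(s). The skeletal sketch below makes the two-stage structure explicit; note that both A(s) and the algorithm-to-valence mapping are open problems, so the lookup used here is a pure placeholder:

```python
def infer_algorithms(system_state: dict) -> set:
    """A(s): the algorithms the system is interpreted as running.
    In general this is frame-dependent and may have no unique answer;
    here it is a trivial lookup purely for illustration."""
    return set(system_state.get("algorithms", []))

# Hypothetical algorithm-to-valence assignments (placeholder values).
ALGORITHM_VALENCE = {
    "negative_reward_prediction_error": -1.0,
    "positive_reward_prediction_error": +1.0,
}

def valence_functionalist(system_state: dict) -> float:
    """v(s|i) under functionalism: valence lives at the algorithmic level."""
    algos = infer_algorithms(system_state)
    return sum(ALGORITHM_VALENCE.get(a, 0.0) for a in algos)

print(valence_functionalist({"algorithms": ["negative_reward_prediction_error"]}))  # -1.0
```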

We should also note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account of consciousness refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purpose of this article are close enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, both in relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need this frame-invariant description of what the system is in a given state. Thus, we need a function that takes as input physical measurements of the system and returns the best possible approximation to what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. In practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well approximated by high-level features of it.
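Under Non-Materialist Physicalism the computation factors through Ph(s) instead. The sketch below uses a toy mirror-symmetry score as the valence readout, gesturing at QRI’s Symmetry Theory of Valence; both functions are stand-ins, not proposed implementations:

```python
def frame_invariant_description(measurements: list) -> list:
    """Ph(s): best approximation to the system's intrinsic physical state.
    Trivially the raw measurements here; in reality this is the hard part."""
    return measurements

def valence_physicalist(measurements: list) -> float:
    """v(s|i) under non-materialist physicalism: read valence off the
    mathematical structure of Ph(s). Toy proxy: degree of mirror
    symmetry of the state, rescaled to [-1, 1]."""
    state = frame_invariant_description(measurements)
    matches = sum(a == b for a, b in zip(state, reversed(state)))
    return 2.0 * matches / len(state) - 1.0

print(valence_physicalist([1, 2, 3, 2, 1]))  # 1.0  (fully symmetric)
print(valence_physicalist([1, 2, 3, 4, 5]))  # -0.6 (mostly asymmetric)
```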

The main problem with Non-Materialist Physicalism arises when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all of the relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem to be a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of that stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal, aspects of our existence happen to be computationally advantageous. Just as using quantum computing for factoring large integers, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, so might using bound conscious experiences help sentient animals outcompete non-sentient ones.

Of course, there is as yet no evidence of macroscopic quantum coherence in the brain, and the brain seems too hot for it anyway, so on its face Pearce’s theory looks exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that make both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters required by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). At one extreme, it may be that a fully optimized brain simply does not need consciousness to solve problems. At the other, it may turn out that consciousness is extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out the way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all future entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Hence the case for pushing our evolutionary trajectory in the right direction now, while we still can.

Coming Next: The Hierarchy of Cooperators

Now that we have covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality on its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, access to a wide range of states of consciousness, and a set of beliefs and practices that help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms that are producing the output are probably highly complex and involve information processing with human-like linguistic representations, which means that the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think that these ones are about the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big-picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

Traps of the God Realm

From Opening the Heart of Compassion by Martin Lowenthal and Lar Short (pages 132-136).

Seeking Oneness

In this realm we want to be “one with the universe.” We are trying to return to a time when we felt no separation, when the world of our experience seemed to be the only world. We want to recover the experience and comfort of the womb. In the universe of the womb, everything was ours without qualification and was designed to support our existence and growth. Now we want the cosmos to be our womb, as if it were designed specifically for our benefit.

We want satisfaction to flow more easily, naturally and automatically. This seems less likely when we are enmeshed in the everyday affairs of the world. Therefore, we withdraw to the familiar world of what is ours, of what we can control, and of our domain of influence. We may even withdraw to a domain in the mind. Everything seems to come so much easier in the realm of thought, once we have achieved some modest control over our minds. Insulating ourselves from the troubles of others and of life, we get further seduced by the seeming limitlessness of this mental world. 

In this process of trance formation, we try to make every sound musical, every image a work of art, and every feeling pleasant. Blocking out all sources of irritation, we retreat to a self-proclaimed “higher” plane of being. We cultivate the “higher qualities of life,” not settling for a “mundane” life.

Masquerade of Higher Consciousness

The danger for those of us on a spiritual path is that the practices and the teachings can be enlisted to serve the realm rather than to dissolve our fixations and open us to truth. We discover that we can go beyond sensual pleasure and material beauty to refined states of consciousness. We achieve purely mental pleasures of increasing subtlety and learn how to maintain them for extended periods. We think we can maintain our new vanity and even expand it to include the entire cosmos, thus vanquishing change, old age, and death. Chogyam Trungpa Rinpoche called this process “spiritual materialism.”

For example, we use a sense of spaciousness to expand our consciousness by imposing our preconception of limitlessness on the cosmos. We see everything that we have created and “it is good.” Our vanity in the god realm elevates our self-image to the level of the divine–we feel capable of comprehending the universe and the nature of reality.

We move beyond our contemplation of limitless space, expanding our consciousness to include the very forces that create vast space. As the creator of vast space, we imagine that we have no boundaries, no limits, and no position. Our mind can now include everything. We find that we do not have concepts for such images and possibilities, so we think that the Divine or Essence must be not any particular thing we can conceive of, must be empty of conceptual characteristics.

Thus our vain consciousness, as the Divine, conceives that it has no particular location, is not anything in particular, and is itself beyond imagination. We arrive at the conclusion that even this attempt to comprehend emptiness is itself a concept, and that emptiness is devoid of inherent meaning. We shift our attention to the idea of being not not any particular thing. We then come to the glorious position that nothing can be truly stated, that nothing has inherent value. This mental understanding becomes our ultimate vanity. We take pride in it, identify as someone who “knows”, and adopt a posture in the world as someone who has journeyed into the ultimate nature of the unknown.

In this way we create more and more chains that bind us and limit our growth as we move ever inward. When we think we are becoming one with the universe, we are only achieving greater oneness with our own self-image. Instead of illuminating our ignorance, we expand its domain. We become ever more disconnected from others, from communication and true sharing, and from compassion. We subtly bind ourselves ever more tightly, even to the point of suffocation, under the guise of freedom in spaciousness.

Spiritual Masquerades of Teachers and Devoted Students

As we acquire some understanding and feel expansive, we may think we are God’s special gift to humanity, here to teach the truth. Although we may not acknowledge that we have something to prove, at some level we are trying to prove how supremely unique and important we are. Our spiritual life-style is our expression of that uniqueness and significance.

Spiritual teachers run a great danger of falling into the traps of the god realm. If a teacher has charisma and the ability to channel and radiate intense energy, this power may be misused to engender hope in students and to bind them in a dependent relationship. The true teacher undermines hope, teaches by the example of wisdom and compassion, and encourages students to be autonomous by investigating truth themselves, checking their own experience, and trusting their own results more than faith.

The teacher is not a god but a bridge to the unknown, a guide to the awareness qualities and energy capacities we want for our spiritual growth. The teacher, who is the same as we are, demonstrates what is possible in terms of aliveness and how to use the path of compassion to become free. In a sense, the teacher touches both aspects of our being: our everyday life of habits and feelings on the one hand and our awakened aliveness and wisdom on the other. While respect for and openness to the teacher are important for our growth and freedom, blind devotion fixates us on the person of the teacher. We then become confined by the limitations of the teacher’s personality rather than liberated by the teachings.

False Transcendence

Many characteristics of this realm–creative imagination, the tendency to go beyond assumed reality and individual perspectives, and the sense of expansiveness–are close to the underlying dynamic of wonderment. In wonder, we find the wisdom qualities of openness, true bliss, the realization of spaciousness within which all things arise, and alignment with universal principles. The god realm attitude results in superficial experiences that fit our preconceptions of realization but that lack the authenticity of wonder and the grounding in compassion and freedom.

Because the realm itself seems to offer transcendence, this is one of the most difficult realms to transcend. The heart posture of the realm propels us to transcend conflict and problems until we are comfortable. The desire for inner comfort, rather than for an authentic openness to the unknown, governs our quest. But many feelings arise during the true process of realization. At certain stages there is pain and disorientation, and at others a kind of bliss that may make us feel like we are going to burst (if there was something or someone to burst). When we settle for comfort we settle for the counterfeit of realization–the relief and pride we feel when we think we understand something.

Because we think that whatever makes us feel good is correct, we ignore disturbing events, information, and people and anything else that does not fit into our view of the world. We elevate ignorance to a form of bliss by excluding from our attention everything that is non-supportive.

Preoccupied with self, with grandiosity, and with the power and radiance of our own being, we resist the mystery of the unknown. When we are threatened by the unknown, we stifle the natural dynamic of wonder that arises in relation to all that is beyond our self-intoxication. We must either include vast space and the unknown within our sense of ourselves or ignore it because we do not want to feel insignificant and small. Our sense of awe before the forces of grace cannot be acknowledged for fear of invalidating our self-image.

Above the Law

According to our self-serving point of view, we are above the laws of nature and of humankind. We think that, as long as what we do seems reasonable to us, it is appropriate. We are accountable to ourselves and not to other people, the environment, or society. Human history is filled with examples of people in politics, business, and religion who demonstrated this attitude and caused enormous suffering.

Unlike the titans who struggle with death, we, as gods, know that death is not really real. We take comfort in the thought that “death is an illusion.” The only people who die are those who are stuck and have not come to the true inner place beyond time, change, and death. We may even believe that we have the potential to develop our bodies and minds to such a degree that we can reverse the aging process and become one of the “immortals.”

A man, walking on a beach, reaches down and picks up a pebble. Looking at the small stone in his hand, he feels very powerful and thinks of how with one stroke he has taken control of the stone. “How many years have you been here, and now I place you in my hand.” The pebble speaks to him, “Though to you, I am only a grain of sand in your hand, you, to me, are but a passing breeze.”

Mental Health as an EA Cause: Key Questions

Michael Johnson and I will be hanging out at the EA Global (SF) 2017 conference this weekend representing the Qualia Research Institute. If you see us and want to chat, please feel free to approach us. This is what we look like:

[Photo: at EA Global 2016 at Berkeley]

I will be handing out the following flyer:


Mental Health as an EA Cause Area: Key Questions

  1. What makes a state of consciousness feel good or bad?
  2. What percentage of worldwide suffering is directly caused by mental illness and/or the hedonic treadmill rather than by external circumstances?
  3. Is there a way to “sabotage the hedonic treadmill”?
  4. Can benevolent and intelligent sentient beings be fully animated by gradients of bliss (offloading nociception to insentient mechanism)?
  5. Can we uproot the fundamental causes of suffering by tweaking our brain structure without compromising our critical thinking?
  6. Can consciousness technologies play a part in making the world a high-trust super-organism?

symmetries

Wallpaper symmetry chart with 5 different notations (slightly different diagram in handout)

If these questions intrigue you, you are likely to find the following readings valuable:

  1. Principia Qualia
  2. Qualia Computing So Far
  3. Quantifying Bliss: Talk Summary
  4. The Tyranny of the Intentional Object
  5. Algorithmic Reduction of Psychedelic States
  6. How to secretly communicate with people on LSD
  7. ELI5 “The Hyperbolic Geometry of DMT Experiences”
  8. Peaceful Qualia: The Manhattan Project of Consciousness
  9. Symmetry Theory of Valence “Explain Like I’m 5” edition
  10. Generalized Wada Test and the Total Order of Consciousness
  11. Wireheading Done Right: Stay Positive Without Going Insane
  12. Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation
  13. The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

Who we are:
Qualia Research Institute (Michael Johnson & Andrés Gómez Emilsson)
Qualia Computing (this website; Andrés Gómez Emilsson)
Open Theory (Michael Johnson)

Printable version:

mental_health_as_ea_cause

24 Predictions for the Year 3000 by David Pearce

In response to the Quora question Looking 1000 years into the future and assuming the human race is doing well, what will society be like?, David Pearce wrote:


The history of futurology to date makes sobering reading. Prophecies tend to reveal more about the emotional and intellectual limitations of the author than the future. […]
But here goes…

Year 3000

1) Superhuman bliss.

Mastery of our reward circuitry promises a future of superhuman bliss – gradients of genetically engineered well-being orders of magnitude richer than today’s “peak experiences”.
Superhappiness?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3274778/

2) Eternal youth.

More strictly, indefinitely extended youth and effectively unlimited lifespans. Transhumans, humans and their nonhuman animal companions don’t grow old and perish. Automated off-world backups allow restoration and “respawning” in case of catastrophic accidents. “Aging” exists only in the medical archives.
SENS Research Foundation – Wikipedia

3) Full-spectrum superintelligences.

A flourishing ecology of sentient nonbiological quantum computers, hyperintelligent digital zombies and full-spectrum transhuman “cyborgs” has radiated across the Solar System. Neurochipping makes superintelligence all-pervasive. The universe seems inherently friendly: ubiquitous AI underpins the illusion that reality conspires to help us.
Superintelligence: Paths, Dangers, Strategies – Wikipedia
Artificial Intelligence @ MIRI
Kurzweil Accelerating Intelligence
Supersentience

4) Immersive VR.

“Magic” rules. “Augmented reality” of earlier centuries has been largely superseded by hyperreal virtual worlds with laws, dimensions, avatars and narrative structures wildly different from ancestral consensus reality. Selection pressure in the basement makes complete escape into virtual paradises infeasible. For the most part, infrastructure maintenance in basement reality has been delegated to zombie AI.
Augmented reality – Wikipedia
Virtual reality – Wikipedia

5) Transhuman psychedelia / novel state spaces of consciousness.

Analogues of cognition, volition and emotion as conceived by humans have been selectively retained, though with a richer phenomenology than our thin logico-linguistic thought. Other fundamental categories of mind have been discovered via genetic tinkering and pharmacological experiment. Such novel faculties are intelligently harnessed in the transhuman CNS. However, the ordinary waking consciousness of Darwinian life has been replaced by state-spaces of mind physiologically inconceivable to Homo sapiens. Gene-editing tools have opened up modes of consciousness that make the weirdest human DMT trip akin to watching paint dry. These disparate state-spaces of consciousness do share one property: they are generically blissful. “Bad trips” as undergone by human psychonauts are physically impossible because in the year 3000 the molecular signature of experience below “hedonic zero” is missing.
ShulginResearch.org
Qualia Computing

6) Supersentience / ultra-high intensity experience.

The intensity of everyday experience surpasses today’s human imagination. Size doesn’t matter to digital data-processing, but bigger brains with reprogrammed, net-enabled neurons and richer synaptic connectivity can exceed the maximum sentience of small, simple, solipsistic mind-brains shackled by the constraints of the human birth-canal. The theoretical upper limits to phenomenally bound mega-minds, and the ultimate intensity of experience, remain unclear. Intuitively, humans have a dimmer-switch model of consciousness – with e.g. ants and worms subsisting with minimal consciousness and humans at the pinnacle of the Great Chain of Being. Yet Darwinian humans may resemble sleepwalkers compared to our fourth-millennium successors. Today we say we’re “awake”, but mankind doesn’t understand what “posthuman intensity of experience” really means.
What earthly animal comes closest to human levels of sentience?

7) Reversible mind-melding.

Early in the twenty-first century, perhaps the only people who know what it’s like even partially to share a mind are the conjoined Hogan sisters. Tatiana and Krista Hogan share a thalamic bridge. Even mirror-touch synaesthetes can’t literally experience the pains and pleasures of other sentient beings. But in the year 3000, cross-species mind-melding technologies – for instance, sophisticated analogues of reversible thalamic bridges – and digital analogs of telepathy have led to a revolution in both ethics and decision-theoretic rationality.
Could Conjoined Twins Share a Mind?
Mirror-touch synesthesia – Wikipedia
Ecstasy : Utopian Pharmacology

8) The Anti-Speciesist Revolution / worldwide veganism/invitrotarianism.

Factory-farms, slaughterhouses and other Darwinian crimes against sentience have passed into the dustbin of history. Omnipresent AI cares for the vulnerable via “high-tech Jainism”. The Anti-Speciesist Revolution has made arbitrary prejudice against other sentient beings on grounds of species membership as perversely unthinkable as discrimination on grounds of ethnic group. Sentience is valued more than sapience, the prerogative of classical digital zombies (“robots”).
What is High-tech Jainism?
The Antispeciesist Revolution
‘Speciesism: Why It Is Wrong and the Implications of Rejecting It’

9) Programmable biospheres.

Sentient beings help rather than harm each other. The successors of today’s primitive CRISPR genome-editing and synthetic gene drive technologies have reworked the global ecosystem. Darwinian life was nasty, brutish and short. Extreme violence and useless suffering were endemic. In the year 3000, fertility regulation via cross-species immunocontraception has replaced predation, starvation and disease to regulate ecologically sustainable population sizes in utopian “wildlife parks”. The free-living descendants of “charismatic mega-fauna” graze happily with neo-dinosaurs, self-replicating nanobots, and newly minted exotica in surreal gardens of Eden. Every cubic metre of the biosphere is accessible to benign supervision – “nanny AI” for humble minds who haven’t been neurochipped for superintelligence. Other idyllic biospheres in the Solar System have been programmed from scratch.
CRISPR – Wikipedia
Genetically designing a happy biosphere
Our Biotech Future

10) The formalism of the TOE is known.
(details omitted; does Quora support LaTeX?)

Dirac recognised the superposition principle as the fundamental principle of quantum mechanics. Wavefunction monists believe the superposition principle holds the key to reality itself. However – barring the epoch-making discovery of a cosmic Rosetta stone – the implications of some of the more interesting solutions of the master equation for subjective experience are still unknown.
Theory of everything – Wikipedia
M-theory – Wikipedia
Why does the universe exist? Why is there something rather than nothing?
Amazon.com: The Wave Function: Essays on the Metaphysics of Quantum Mechanics (9780199790548): Alyssa Ney, David Z Albert: Books

11) The Hard Problem of consciousness is solved.

The Hard Problem of consciousness was long reckoned insoluble. The Standard Model in physics from which (almost) all else springs was a bit of a mess but stunningly empirically successful at sub-Planckian energy regimes. How could physicalism and the ontological unity of science be reconciled with the existence, classically impossible binding, causal-functional efficacy and diverse palette of phenomenal experience? Mankind’s best theory of the world was inconsistent with one’s own existence, a significant shortcoming. However, all classical- and quantum-mind conjectures with predictive power had been empirically falsified by 3000 – with one exception.
Physicalism – Wikipedia
Quantum Darwinism – Wikipedia
Consciousness (Stanford Encyclopedia of Philosophy)
Hard problem of consciousness – Wikipedia
Integrated information theory – Wikipedia
Principia Qualia
Dualism – Wikipedia
New mysterianism – Wikipedia
Quantum mind – Wikipedia

[Which theory is most promising? As with the TOE, you’ll forgive me for skipping the details. In any case, my ideas are probably too idiosyncratic to be of wider interest, but for anyone curious: What is the Quantum Mind?]

12) The Meaning of Life resolved.

Everyday life is charged with a profound sense of meaning and significance. Everyone feels valuable and valued. Contrast the way twenty-first century depressives typically found life empty, absurd or meaningless; and how even “healthy” normals were sometimes racked by existential angst. Or conversely, compare how people with bipolar disorder experienced megalomania and messianic delusions when uncontrollably manic. Hyperthymic civilization in the year 3000 records no such pathologies of mind or deficits in meaning. Genetically preprogrammed gradients of invincible bliss ensure that all sentient beings find life self-intimatingly valuable. Transhumans love themselves, love life, and love each other.
https://www.transhumanism.com/

13) Beautiful new emotions.

Nasty human emotions have been retired – with or without the recruitment of functional analogs to play their former computational role. Novel emotions have been biologically synthesised and their “raw feels” encephalised and integrated into the CNS. All emotion is beautiful. The pleasure axis has replaced the pleasure-pain axis as the engine of civilised life.
An information-theoretic perspective on life in Heaven

14) Effectively unlimited material abundance / molecular nanotechnology.

Status goods long persisted in basement reality, as did relics of the cash nexus on the blockchain. Yet in a world where both computational resources and the substrates of pure bliss aren’t rationed, such ugly evolutionary hangovers first withered, then died.
http://metamodern.com/about-the-author/
Blockchain – Wikipedia

15) Posthuman aesthetics / superhuman beauty.

The molecular signatures of aesthetic experience have been identified, purified and overexpressed. Life is saturated with superhuman beauty. What passed for “Great Art” in the Darwinian era is no more impressive than year 2000 humans might judge, say, a child’s painting by numbers or Paleolithic daubings and early caveporn. Nonetheless, critical discernment is retained. Transhumans are blissful but not “blissed out” – or not all of them at any rate.
Art – Wikipedia
http://www.sciencemag.org/news/2009/05/earliest-pornography

16) Gender transformation.

Like gills or a tail, “gender” in the human sense is a thing of the past. We might call some transhuman minds hyper-masculine (the “ultrahigh AQ” hyper-systematisers), others hyperfeminine (“ultralow AQ” hyper-empathisers), but transhuman cognitive styles transcend such crude dichotomies, and can be shifted almost at will via embedded AI. Many transhumans are asexual, others pan-sexual, a few hypersexual, others just sexually inquisitive. “The degree and kind of a man’s sexuality reach up into the ultimate pinnacle of his spirit”, said Nietzsche – which leads to (17).

Object Sexuality – Wikipedia
Empathizing & Systematizing Theory – Wikipedia
https://www.livescience.com/2094-homosexuality-turned-fruit-flies.html
https://www.wired.com/2001/12/aqtest/

17) Physical superhealth.

In 3000, everyone feels physically and psychologically “better than well”. Darwinian pathologies of the flesh such as fatigue, the “leaden paralysis” of chronic depressives, and bodily malaise of any kind are inconceivable. The (comparatively) benign “low pain” alleles of the SCN9A gene that replaced their nastier ancestral cousins have been superseded by AI-based nociception with optional manual overrides. Multi-sensory bodily “superpowers” are the norm. Everyone loves their body-images in virtual and basement reality alike. Morphological freedom is effectively unbounded. Awesome robolovers, nights of superhuman sensual passion, 48-hour whole-body orgasms, and sexual practices that might raise eyebrows among prudish Darwinians have multiplied. Yet life isn’t a perpetual orgy. Academic subcultures pursue analogues of Mill’s “higher pleasures”. Paradise engineering has become a rigorous discipline. That said, a lot of transhumans are hedonists who essentially want to have superhuman fun. And why not?
https://www.wired.com/2017/04/the-cure-for-pain/
http://io9.gizmodo.com/5946914/should-we-eliminate-the-human-ability-to-feel-pain
http://www.bbc.com/future/story/20140321-orgasms-at-the-push-of-a-button

18) World government.

Routine policy decisions in basement reality have been offloaded to ultra-intelligent zombie AI. The quasi-psychopathic relationships of Darwinian life – not least the zero-sum primate status-games of the African savannah – are ancient history. Some conflict-resolution procedures previously off-loaded to AI have been superseded by diplomatic “mind-melds”. In the words of Henry Wadsworth Longfellow, “If we could read the secret history of our enemies, we should find in each man’s life sorrow and suffering enough to disarm all hostility.” Our descendants have windows into each other’s souls, so to speak.

19) Historical amnesia.

The world’s last experience below “hedonic zero” marked a major transition in the evolutionary development of life. In 3000, the nature of sub-zero states below Sidgwick’s “natural watershed” isn’t understood except by analogy: some kind of phase transition in consciousness below life’s lowest hedonic floor – a hedonic floor that is being genetically ratcheted upwards as life becomes ever more wonderful. Transhumans are hyper-empathetic. They get off on each other’s joys. Yet paradoxically, transhuman mental superhealth depends on biological immunity to true comprehension of the nasty stuff elsewhere in the universal wavefunction that even mature superintelligence is impotent to change. Maybe the nature of e.g. Darwinian life, and the minds of malaise-ridden primitives in inaccessible Everett branches, doesn’t seem any more interesting than we find books on the Dark Ages. Negative utilitarianism, if it were conceivable, might be viewed as a depressive psychosis. “Life is suffering”, said Gautama Buddha, but fourth millennials feel in the roots of their being that Life is bliss.
Invincible ignorance? Perhaps.
Negative Utilitarianism – Wikipedia

20) Super-spirituality.

A tough one to predict. But neuroscience can soon identify the molecular signatures of spiritual experience, refine them, and massively amplify their molecular substrates. Perhaps some fourth millennials enjoy lifelong spiritual ecstasies beyond the mystical epiphanies of temporal-lobe epileptics. Secular rationalists don’t know what we’re missing.
https://www.newscientist.com/article/mg22129531-000-ecstatic-epilepsy-how-seizures-can-be-bliss/

21) The Reproductive Revolution.

Reproduction is uncommon in a post-aging society. Most transhumans originate as extra-uterine “designer babies”. The reckless genetic experimentation of sexual reproduction had long seemed irresponsible. Old habits still died hard. By year 3000, the genetic crapshoot of Darwinian life has finally been replaced by precision-engineered sentience. Early critics of “eugenics” and a “Brave New World” have discovered by experience that a “triple S” civilisation of superhappiness, superlongevity and superintelligence isn’t as bad as they supposed.
https://www.reproductive-revolution.com/
https://www.huxley.net/

22) Globish (“English Plus”).

Automated real-time translation has been superseded by a common tongue – Globish – spoken, written or “telepathically” communicated. Partial translation manuals for mutually alien state-spaces of consciousness exist, but – as twentieth century Kuhnians would have put it – such state-spaces tend to be incommensurable and their concepts state-specific. Compare how poorly lucid dreamers can communicate with “awake” humans. Many Darwinian terms and concepts are effectively obsolete. In their place, active transhumanist vocabularies of millions of words are common. “Basic Globish” is used for communication with humble minds, i.e. human and nonhuman animals who haven’t been fully uplifted.
Incommensurability – SEoP
Uplift (science_fiction) – Wikipedia

23) Plans for Galactic colonization.

Terraforming and 3D-bioprinting of post-Darwinian life on nearby solar systems is proceeding apace. Vacant ecological niches tend to get filled. In earlier centuries, a synthesis of cryonics, crude reward pathway enhancements and immersive VR software, combined with revolutionary breakthroughs in rocket propulsion, led to the launch of primitive manned starships. Several are still starbound. Some transhuman utilitarian ethicists and policy-makers favour creating a utilitronium shockwave beyond the pale of civilisation to convert matter and energy into pure pleasure. Year 3000 bioconservatives focus on promoting life animated by gradients of superintelligent bliss. Yet no one objects to pure “hedonium” replacing unprogrammed matter.
Interstellar Travel – Wikipedia
Utilitarianism – Wikipedia

24) The momentous “unknown unknown”.

If you read a text and the author’s last words are “and then I woke up”, everything you’ve read must be interpreted in a new light – semantic holism with a vengeance. By the year 3000, some earth-shattering revelation may have changed everything – some fundamental background assumption of earlier centuries has been overturned that might not have been explicitly represented in our conceptual scheme. If it exists, then I’ve no inkling what this “unknown unknown” might be, unless it lies hidden in the untapped subjective properties of matter and energy. Christian readers might interject “The Second Coming”. Learning that the Simulation Hypothesis is true would be a secular example of such a revelation. Some believers in an AI “Intelligence Explosion” speak delphically of “The Singularity”. Whatever – Shakespeare made the point more poetically, “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy”.

As it stands, yes, (24) is almost vacuous. Yet compare how the philosophers of classical antiquity who came closest to recognising their predicament weren’t intellectual titans like Plato or Aristotle, but instead the radical sceptics. The sceptics guessed they were ignorant in ways that transcended the capacity of their conceptual scheme to articulate. By the lights of the fourth millennium, what I’m writing, and what you’re reading, may be stultified by something that humans don’t know and can’t express.
Ancient Skepticism – SEoP

**********************************************************************

OK, twenty-four predictions! Successful prophets tend to locate salvation or doom within the credible lifetime of their intended audience. The questioner asks about life in the year 3000 rather than, say, a Kurzweilian 2045. In my view, everyone reading this text will grow old and die before the predictions of this answer are realised or confounded – with one possible complication.

Opt-out cryonics and opt-in cryothanasia are feasible long before the conquest of aging. Visiting grandpa in the cryonics facility can turn death into an event in life. I’m not convinced that posthuman superintelligence will reckon that Darwinian malware should be revived in any shape or form. Yet if you want to wake up one morning in posthuman paradise – and I do see the appeal – then options exist:
http://www.alcor.org/

********************************************************************
p.s. I’m curious about the credence (if any) the reader would assign to the scenarios listed here.

Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit- they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is there’s no ’theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside“. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

 

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and that it’s impossible to objectively tell whether a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering (whether monkeys can suffer, or whether dogs can), it doesn't have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn't have the capacity to tell us how it feels (or that we don't have the capacity to understand) is suffering; whether any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we're also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of a disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something objective driving the convergence; in this way, Brian's framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming one.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more 'real' than it is. Traditionally, whenever we've discovered something is a leaky reification and shouldn't be treated as 'too real', we've been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn't due to élan vital but to a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this "de-reification" process on consciousness (enumerating its coherent constituent pieces) has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became, and arguably I see the same trajectory in Brian's work and Luke Muehlhauser's report. Their model uncertainty has seemingly become larger, not smaller, as they've looked into techniques for how to "de-reify" consciousness while preserving some flavor of moral value. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but it is also consistent with consciousness not being a reification at all, and instead being a real thing. Trying to "de-reify" something that's not a reification will produce deep confusion, just as surely as trying to treat a reification as 'more real' than it actually is will.

Edsger W. Dijkstra famously noted that "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." And so if our ways of talking about moral value fail to 'carve reality at the joints', then by all means let's build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system under which you just simulated someone getting tortured, and other interpretations that don't imply that. Did you torture anyone? If you're a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge case that would never come up in practice, but in fact it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
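
To make this concrete, here's a toy sketch (entirely my own illustration, with made-up state labels; nothing here is FRI's formalism): the same physical state trace "implements" two different computations, depending solely on which interpretation map we choose.

```python
# Toy sketch (my own illustration): one physical process, two computations.
# Physical states are opaque labels; interpretation maps do all the work.

physical_trace = ["s0", "s1", "s2"]  # e.g., three snapshots of shaken popcorn

# Interpretation A reads the trace as computing AND(1, 1) = 1...
interp_a = {"s0": ("input", (1, 1)), "s1": ("working",), "s2": ("output", 1)}

# ...while interpretation B reads the *same* trace as computing OR(0, 0) = 0.
interp_b = {"s0": ("input", (0, 0)), "s1": ("working",), "s2": ("output", 0)}

def computation_under(interpretation, trace):
    """The abstract computation this trace realizes under a given map."""
    return [interpretation[state] for state in trace]

# Both readings are internally consistent; physics alone doesn't pick one.
print(computation_under(interp_a, physical_trace))
print(computation_under(interp_b, physical_trace))
```

Nothing in the physics privileges interpretation A over interpretation B; the choice lives entirely in the map, which is the point.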

I don't think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: "Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and 'observer-relative' (to use John Searle's terminology). There's no objective meaning to 'the computation that this physical system is implementing' (unless you're referring to the specific equations of physics that the system is playing out)."

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect (that precisely mapping between physical processes and Turing-level computational processes is inherently impossible) in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.
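
McCabe's one-to-many point is easy to make concrete. In this minimal sketch (my own illustration, with a hypothetical TTL-style threshold, not anything from McCabe's paper), many physically distinct memory states collapse onto one logical state, so no bijective map back to the physical level exists:

```python
# Minimal sketch (my illustration) of the one-to-many correspondence between
# logical states and electronic states: the threshold is a chosen convention.

def logical_state(voltage: float) -> int:
    """Read a memory cell's voltage as a bit (hypothetical 2.0V cutoff)."""
    return 1 if voltage >= 2.0 else 0

# Physically distinct electronic states of the "same" memory cell...
voltages = [2.4, 3.3, 4.9]

# ...all collapse to the same logical state under this interpretation,
# so there is no bijection from logical states back to electronic states:
assert {logical_state(v) for v in voltages} == {1}
```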

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point (that "There's no objective meaning to 'the computation that this physical system is implementing'"), then this latter task of figuring out "which flavors of computation matter most" is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what's "really" happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.

Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let's say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to "uncompute" to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience "the same as" the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here's a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there's a package, and we'd like to "query" it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What's the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1−ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the "query the bomb" state if there's no bomb there, but moving ε² probability onto the "query the bomb" state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we'll have order 1 probability of being in the "query the bomb" state if there's no bomb.  By contrast, if there is a bomb, then the total probability we've ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there's a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover's algorithm works.)

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
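
(As an aside on Aaronson's FHE example above: real FHE, à la Gentry, supports arbitrary circuits over ciphertexts. The following is only a deliberately weak, additively homomorphic toy of my own, and it's insecure since the key is reused, but it makes the basic trick, computing on data without ever decrypting it, concrete.)

```python
# Toy additively homomorphic scheme (my own, insecure, nothing like real FHE):
# adding ciphertexts adds the underlying plaintexts, without any decryption.

N = 10_000_019  # public modulus (arbitrary choice)
k = 424_242     # secret key (hypothetical)

def enc(m: int) -> int:
    return (m + k) % N

def dec(c: int, uses: int = 1) -> int:
    # 'uses' tracks how many encrypted values were folded into this ciphertext
    return (c - uses * k) % N

c = (enc(20) + enc(22)) % N   # compute on ciphertexts; plaintexts never exposed
assert dec(c, uses=2) == 42   # only the key-holder recovers the sum
```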

To sum up: Brian's notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be "uncomputable"/reversible (in Aaronson's sense of uncomputing a process by applying U⁻¹) would suggest s-risks aren't as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it's the argument that there's no order to be found, no rigor to be had. It obscures this with talk of "function", which is a red herring that it not only doesn't define, but admits is undefinable. It doesn't make any positive assertion. Functionalism is skepticism: nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the Middle Ages arguing over whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it's often difficult to tell the good ones apart from cases where we're just cleverly fooling ourselves. In retrospect, the best way to *prove* that systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That's how I see what we're doing at QRI with Qualia Formalism: we're assuming it's possible to build stuff, and we're working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.
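
To gesture at the kind of claim the STV makes, here's a toy sketch (entirely my own simplification, not QRI's actual formalism): stand in for the "mathematical object isomorphic to an experience" with a matrix, score its symmetry, and read the score as a hypothetical valence proxy.

```python
import numpy as np

# Toy sketch (my simplification, not QRI's formalism): score a matrix's
# symmetry and read it as a hypothetical valence proxy, per the STV's claim.

def symmetry_score(m: np.ndarray) -> float:
    """1.0 = perfectly symmetric; approaches 0 as asymmetry dominates."""
    sym = (m + m.T) / 2    # symmetric part of the matrix
    asym = (m - m.T) / 2   # antisymmetric part
    total = np.linalg.norm(sym) + np.linalg.norm(asym)
    return 1.0 - np.linalg.norm(asym) / total

rng = np.random.default_rng(0)
noisy = rng.random((4, 4))
print(symmetry_score(noisy))                   # partial symmetry: lower score
print(symmetry_score((noisy + noisy.T) / 2))   # fully symmetric: scores 1.0
```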

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.
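
For a rough flavor of the consonance/dissonance side of this (a gross simplification of my own, not Andrés's actual CDNS model), one can score pairs of active "harmonics" with a Plomp-Levelt-style roughness curve and treat low total dissonance as a hypothetical marker of pleasant states:

```python
import math

# Toy sketch (my simplification, not the actual CDNS model): total pairwise
# roughness of active components, weighted by their amplitudes.

def roughness(f1: float, f2: float) -> float:
    """Crude Plomp-Levelt-style curve: peaks when tones are close but unequal."""
    d = abs(f1 - f2) / min(f1, f2)
    return d * math.exp(1 - 4 * d)

def total_dissonance(freqs, amps):
    pairs = [(i, j) for i in range(len(freqs)) for j in range(i + 1, len(freqs))]
    return sum(amps[i] * amps[j] * roughness(freqs[i], freqs[j]) for i, j in pairs)

# Harmonically related components score low...
print(total_dissonance([100, 200, 300], [1.0, 0.5, 0.25]))
# ...while clustered, inharmonic components score high:
print(total_dissonance([100, 107, 113], [1.0, 1.0, 1.0]))
```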

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism (which is to say radical skepticism/eliminativism, the metaphysics of last resort) only looks as good as it does because nobody's been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian's quotes, he seems split on this issue; I'd like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice versa. And if so, how? If not, this seems to propagate through FRI's ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, 'code' for massive suffering, and there's no way to derive any 'ground truth' or even to pick between interpretations in a principled way (e.g. my popcorn example). If this isn't the case, why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI's theories and computational intuitions to the framework of "hypercomputation" (i.e., the understanding that there's a formal hierarchy of computational systems, and that Turing machines are only one level of many), but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).

In conclusion: I think FRI has a critically important goal, the reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition of what these things are. I look forward to further work in this field.

Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:
Flavors of Computation are Flavors of Consciousness:
https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/
Is There a Hard Problem of Consciousness?
http://reducing-suffering.org/hard-problem-consciousness/
Consciousness Is a Process, Not a Moment
http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/
How to Interpret a Physical System as a Mind
http://reducing-suffering.org/interpret-physical-system-mind/
Dissolving Confusion about Consciousness
http://reducing-suffering.org/dissolving-confusion-about-consciousness/
Debate between Brian & Mike on consciousness:
https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D
Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:
My meta-framework for consciousness, including the Symmetry Theory of Valence:
http://opentheory.net/PrincipiaQualia.pdf
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:
http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:
http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/
My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/