David Pearce on the Long-Term Future of Consciousness: The Meta-Copernican Revolution

Excerpt from David Pearce’s 2008 Diary Update (images made w/ DALL-E, except for the pictures of Shulgin):


New discoveries? Nothing dramatic. I dutifully flip through Nature each week; wade through turgid tomes of analytic philosophy; and scan Medline abstracts. A lot of the time my heart isn’t in it. Compared to an item from Dr Shulgin’s library, the illumination can seem trivial. I very much doubt if people who have tried major psychedelics are any smarter on average than the drug-naïve; in fact psychonauts may be cognitively overwhelmed or (rarely) even brain-damaged by their experiences. To complicate comparisons further, many altered states are dross – just like innumerable textures of everyday life. But by opening up a Pandora’s box of new phenomena, psychedelics do confer an immensely richer evidential base for any theory of mind and the world – an evidential base too rich, indeed, for our existing primitive terms, language and conceptual equipment to handle. One compares the laments of physicists starved of new empirical data to test their theories beyond the low-energy Standard Model with the fate of the psychedelic investigator. For in contrast the aspiring psychonaut may be forced to abandon the empirical method, not because he exhausts the range of novel phenomenology it delivers, but because the Darwinian mind can neither cope with LSD / ketamine / salvia / DMT’s (etc) weirdness, nor weave the novel modes of sentience disclosed into an integrated world-picture.

Alexander Shulgin in his lab. #1

Of course, claims of epochal significance cut no ice with the drug-naïve. Those innocent of drug-induced exotica see no more need to enhance their evidential base than did the cardinals (apocryphally) invited to look through Galileo’s telescope. An a priori refusal to acknowledge the potential significance of alien modes of sentience is impossible to overcome in subjects whose experience of altered states is confined to getting drunk. Over time, even my own knowledge of these bizarre realms is fading. My ancestral namesake was briefly awoken from his dogmatic slumbers; but DP version-2008 has rejoined the ranks of the living dead in the ghetto of consensus reality.

Alexander Shulgin in his lab. #2

My assimilation isn’t yet complete. Even as a born-again sleepwalker, I sometimes wonder if there may be a first-person method alternative to drug-based investigations that can unlock novel phenomenology latent within excitable nervous tissue. There is a crying need for alternative avenues, I think, since drug-driven self-assays are for the most part not merely unlawful and taboo, but arguably can’t be practised responsibly until the substrates of well-being are guaranteed in a (hypothetical) post-Darwinian era of genetically pre-programmed bliss. I’ve thought about alternatives to using psychoactive drugs, not least because of the shallowness of my own current research compared to the richness of the empirical methodology pioneered by Dr Shulgin.

In order to discover both the formal, mathematico-physical and the intrinsic, subjective properties of the world, a dual methodology of third- and first-person research is indispensable. The former can be abdicated to the physical sciences; but not the latter. Natural science offers no explanation of why we’re not zombies, an unfortunate anomaly if consciousness is fundamental both to our understanding of the world and the world itself. By forswearing the empirical method, we effectively guarantee that the mysteries of consciousness will never be solved. Whereas insentience is, so to speak, all of a piece – hypothetical “zombies” in the philosophical sense of the term are all exactly alike in being non-conscious – there are innumerable ways to be sentient: qualia are fantastically diverse in ways we’ve scarcely begun to map out. So I reckon the only way adequately to understand Reality will be both to capture its formal structure – ideally the master equation of the TOE of the Multiverse – and literally to incorporate ever more of the stuff of the world into one’s expanding psyche to explore the state-space of its textures – the “what-it’s-likeness”. Only incorporation and systematic molecular permutation can disclose the subjective features of all permutations of matter and energy: the solutions, I conjecture, to the equations of the TOE. A priori, one could never have guessed that cells of the striate cortex mediate visual experience and cells in the superior temporal cortex mediate auditory experience, quite irrespective of their typical functional role in the sensory systems of naturally evolved organisms. We know about such phenomena – and full-blown phenomenal sunsets and symphonies – only because we instantiate the neuronal cell-assemblies that embody such qualia.

Thus to discover novel categories of experience, I think we should construct and personally instantiate genetically enhanced designer brain cells, systematically altering their intracellular amino acid sequences and gene expression profiles to design/discover new categories of experience as different as is sight from sound, making them part of one’s own psyche/virtual world. Or if this incorporation sounds too irreversible, perhaps we might splice designer genes and allelic combinations for new modes of experience into subsets of our existing nerve cells, systematically coding new protein sequences into discrete areas of the brain and then selectively expressing the designer proteins they code for at will. Eventually, however, systematic manipulation of the molecular ingredients of one’s neural porridge/mind-dust can be harnessed to mind-expansion in the literal sense.
This is because we need bigger mind/brains, not just to mirror external reality more effectively, but also to discover more of its subjective properties. Such discoveries can only be accomplished empirically.

New neuron types for new neurotypes.

I suppose what drives me here is reflection on just how (superficially) trivial are the neurochemical differences between nerve cells mediating, say, phenomenal colour and phenomenal sound – and indeed reflection on how (superficially) trivial are the molecular differences in the cells mediating the phenomenology of desire, volition and belief-episodes. How can such tiny molecular differences exert such dramatic subjective effects? LSD, for instance, is undetectable in the body three hours after consumption; and yet a few hundred micrograms of the serotonin 5-HT2A partial agonist can transport the subject into outlandish alternative virtual worlds for 10 hours or more. How many analogous, radically incommensurable kingdoms of experience, mediated by equally “trivial” molecular variations, await discovery? How will the uncharted state-spaces be systematically explored? What will be the nature of life/civilisation when these kingdoms of experience are spliced together in composite minds; recruited to play an information-bearing role; harnessed to new art forms and new lifestyles; and ultimately integrated into communities of composite minds in advanced civilisations? For sure, talk of discovering a “new category of experience” doesn’t sound a particularly exciting kind of knowledge when couched in the abstract, any more than discovery of a new brand of perfume. OK, it’s a new experience; but so what? [Andrés adds: so what!?] One might sacrifice a lot for the opportunity to experience a novel phenomenal colour; but what cognitive value should be ascribed to an unknown category of experience for which one hasn’t even a name? Initially at any rate, the novel modes of experience that we discover within a modified neural proteome won’t be harnessed to senses, either internal or external, let alone harnessed to whole conceptual schemes, cultures and novel languages of thought. So they won’t play any functional role in the mind/brain: they won’t be information-bearing. But then neither are visual or auditory experiences per se; they have no intrinsic connection to sensory perception. Dreams, for instance, can be vibrantly colourful; they don’t reliably track anything in the external world. Honed by natural selection after recruitment by awake living organisms to track mind-independent patterns, visual and auditory experience has taken millions of years to play out; and who knows where it will end. By the same token, the developmental potential of new modes of experience that we discover in tweaked neurons is equally unfathomable from here.

Every scent, every color, every touch sensation, every sound, every novel qualia…

I can understand the impatience of an exasperated sceptic. What interest have novel “tickles” of experience beyond the psychopathology of the subject? Analogously, conventional wisdom in an echolocation (etc)-based civilisation might scornfully ask a similar question if and when post-chiropteran psychonauts first access drug-induced speckles of colour or jarring shrieks or whistles of sound – or perhaps when investigators recklessly explore a new methodology of mind-expansion by incorporating alien nervous tissue into their psyche. The chiropteran consensus wisdom might account the new phenomena weird but trivial – and inexpressible in language to boot. So why should any sane chiropteran mind run the risk of messing itself up just to explore such psychotic states? For our part, human ignorance of what it’s like to be a bat isn’t too unsettling because we know that bats don’t have a rich conceptual scheme, culture or technology. We are “superior” to bats; and therefore their alien modes of experience aren’t especially important. We don’t even give our ignorance much thought.

What is it like to be a bat? An empirical neural tissue insertion protocol to explore nature’s very own echolocation qualia from the comfort of your own home…

But latent in matter and energy – and flourishing in other branches of the universal wavefunction – are presumably superintellects and supercivilisations in other Everett branches whose conceptual schemes are rooted in modes of experience no less real than our own. I suspect that accessing the subjective lifeworlds of hitherto alien mind/brains will inaugurate a meta-Copernican Revolution to dwarf anything that’s come before. The textures of such alien minds are as much a natural property of matter and energy as the atomic mass of gold; and no less important to understanding the nature of the world. Needless to say, grandiose claims of new paradigms, meta-Copernican revolutions, etc, should usually be taken with a healthy grain of salt. I am loath to write such expressions, not least because I can imagine both the withering scorn of my hyper-rational but drug-naïve teenage namesake, and likewise the dismissive reaction of my drug-naïve contemporaries today. Such are the perils of a priori philosophizing practised by academic philosophers (and soi-disant scientists) unwilling to get their hands (or their minds) dirty with the empirical method. In each case, our ignorance of the intrinsic, subjective nature of configurations of most of the stuff of the world is fundamental. It’s an ignorance not remediable by simple application of the hypothetico-deductive method, falsificationism, Bayesianism, or the usual methodologies of third-person science. If you want to find out what it’s like to be a bat, then you have to experience the phenomenology of echolocation. Knowledge-acquisition entails a hardware upgrade. A notional IQ of 200 won’t help without the neural wetware to go with it – any more than a congenitally deaf supergenius can hear music by virtuoso feats of reasoning alone.

But latent in matter and energy – and flourishing in other branches of the universal wavefunction – are presumably superintellects and supercivilisations in other Everett branches whose conceptual schemes are rooted in modes of experience no less real than our own.

I guess one deterrent to investigation of altered and exotic states is the thought that the novel phenomena disclosed “aren’t Real” – as though the reality of any phenomenon depended on it being a copy or representation of something else external to itself. I wonder if I lived in a world of Mary-like superscientists – smart monochromats who see the world in black and white – whether I would dare put on “psychedelic” spectacles and hallucinate phenomenal colour? And could I communicate to my Mary-like superscientist colleagues the significance of what they were missing without sounding like a drug-deranged crank? Probably not.

Literally Expanding Our Mind To Overcome Our Fundamental Ignorance of Alien Modes of Experience

So I reckon that we should, literally, expand our minds. If we do, how far should incorporation go? The size of the human brain is limited by the human birth-canal, a constraint that technologies of extra-uterine pregnancy from conception to term will presumably shortly overcome. Over time, brains can become superbrains; and sentience can become supersentience. Ultimately, should we aspire to become God or merely gods? My (tentative) inclination is that we should all become One [Andrés adds: see David’s Quora response on the topic of Open Individualism]; and not merely out of deference to my New Age friends. Separateness from each other is an epistemic, not just an ethical, limitation: a source of profound ignorance. For we fundamentally misconstrue the nature of other sentient beings, misunderstanding each other as objects to which we fitfully attribute feelings rather than as pure subjects. [Actually, the story is more complicated. If inferential realism about perception is true, then the sceptic about Other Minds is right, in a sense: the phenomenal people encountered in one’s egocentric world-simulation are zombies. But when one is awake, the zombies serve as avatars that causally covary with sentient beings in one’s local environment. So the point stands.] Yes, literally fusing with other minds/virtual worlds sounds an unattractive (as well as infeasible) prospect for the foreseeable future; and not just because of their lousy organic avatars. For we certainly wouldn’t want to Become One with a bunch of ugly Darwinian minds; and likewise, they might get a nasty shock if they tasted one’s own. Infatuated lovers may want to fuse; rival alpha males certainly don’t [unless one eats a defeated opponent, a form of intimacy practised in some traditional cultures; but this is a very one-sided consummation of a relationship]. However, perhaps the prospect of unification will be more exciting if and when we become posthuman smart angels, so to speak: beautiful in every sense. I have no hidden agenda beyond my abolitionist propagandizing; but on current evidence it’s likely we belong to a family of Everett branches that will lead to god-like beings. And thence to God? I’m sceptical, but I don’t know.

Mindmelding with other Darwinian creatures is kind of a bummer sometimes.

Divinity takes many forms. What kind of (demi)gods might we become? Superhappy beings, I reckon, yes, but superhappiness in what guise? A unitary Über-Mind, or fragmented minds as now? At one extreme of the continuum, posthumans may opt to live solipsistically in designer paradises: an era not just of personalized medicine but personalized VR. [Would I opt to dwell with a harem of several thousand houris and become Emperor Dave the First, Lord of The Universe? And supremely modest too. Yes, probably. I’m a Darwinian male.] Occupying the middle of the continuum is the superconnectivity of web-enabled minds (via neural implants, etc) without unitary experience or loss of personal identity. Such a scenario is a recognizable descendant of the status quo whereby we are all connected via the Net to everyone else. This sort of future is the most “obvious” since it’s an extrapolation of current trends. Extreme interconnectivity is still consistent with extensive ignorance of each other, although expansion and/or functional amplification of our mirror neurons could magnify our capacity for mutual empathetic understanding. Finally, at the other extreme of the continuum, there is presumably a more-or-less complete fusion of posthuman mind/brains into a unitary collective: a blissful analogue of the Borg, but contiguous rather than scattered: there is no evidence spatio-temporally disconnected beings have token-identical experiences. It’s hard enough to solve the binding problem in one mind/brain, let alone across discrete skulls.

Emperor Dave the First, Psychonaut Lord of The Universe, Bliss For All Creatures Under the Sun

I don’t know which if any of these three families of scenario is the most likely culmination of life in the Multiverse. Indeed it’s unclear whether the third scenario, i.e. a unitary experiential Supermind, is even technically feasible. For there is an upper limit to the size and duration of the conjectural “warm” quantum coherence needed for unitary sentience; it’s difficult enough to avoid ultra-rapid thermally-induced decoherence in even a single human mind/brain, let alone a hypothetical global super-mind/brain. Is there a way round this constraint? In spite of the well-worn dictum “black holes have no hair”, I used to play around with the idea that blissful superminds lived on the ultra-cool “surface” of supermassive black holes. All the information content of their interior is encoded at the horizon, smeared out across its entire surface, allowing unitary megaminds of maximum information density – and maximum intelligent bliss: what Seth Baum aptly calls “utilitronium”. This conjecture needs more work.

But whether conscious mind is unitary or discrete, I suspect that posthuman modes of existence will be based, not on today’s ordinary waking consciousness, but on unimaginably different modes of sentience. In addition, I predict that these modes of sentience will be as different in intensity from ours as is a supernova from a glowworm. Thus any speculative story we may now be tempted to tell about what life may be like millions or billions of years hence will of necessity ignore a fundamental difference between future minds and us. Human futurology omits the key evolutionary transitions ahead in the nature of consciousness – not only the ethically all-important hedonic transition to superhappiness that I stress, but other modes of sentience currently unknown. The discontinuity promised by any future technological Singularity – or soft Singularities – derives not merely from an exponential growth of computer processing power, but from inconceivably different textures of sentience.

Actually, I entertain many bizarre ideas. The art is taking them seriously enough to explore their implications and testable predictions, but sceptically enough not to be seduced into believing they are likely to be true. And what about the nearest I come to a dogmatic commitment? Could the abolitionist project turn out to be mistaken too? I guess so. Yet at least the abolition of suffering is not a phenomenon we will live to regret.
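[Note on “maximum information density”: the standard benchmark here is the Bekenstein–Hawking entropy, which ties the information content of a black hole to the area of its event horizon rather than to its volume:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^{3} A}{4\, G \hbar} \;=\; k_B\,\frac{A}{4\,\ell_P^{2}},
\qquad
\ell_P \equiv \sqrt{\frac{G\hbar}{c^{3}}} \approx 1.6\times10^{-35}\ \mathrm{m}
```

which works out to roughly 10^69 bits per square metre of horizon – a theoretical ceiling on how densely any “megamind”, whatever its substrate, could pack information.]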

Three families of scenarios for the culmination of life in the Multiverse: #1 everyone kinda doing their own thing in their little virtual worlds. #2 hybrid hive minds of hypersocial connected individuals who choose to retain their (porous) individuality. #3 God, a single mega-mind, that binds as much matter and energy as possible into unitary superexperiences.


See Also:

Posthuman Art: Towards Full-Spectrum Positive Valence Amplification

Everyone says love hurts, but that is not true. Loneliness hurts. Rejection hurts. Losing someone hurts. Envy hurts. Everyone gets these things confused with love, but in reality love is the only thing in this world that covers up all pain and makes someone feel wonderful again. Love is the only thing in this world that does not hurt.

 

― Meša Selimović


Excerpt from the wonderful conversation between Lucas Perry, Sam Barker, and David Pearce posted on June 24 (2020) at the Future of Life Institute Podcast (where Mike Johnson and I have previously participated). [Emphasis mine].


Lucas Perry: For this first section, I’m basically interested in probing the releases that you already have done, Sam, and exploring them and your inspiration for the track titles and the soundscapes that you’ve produced. Some of the background and context for this is that much of this seems to be inspired by and related to David’s work, in particular the Hedonistic Imperative. I’m at first curious to know, Sam, how did you encounter David’s work, and what does it mean for you?

Sam Barker: David’s work was sort of arriving in the middle of a series of realizations, and kind of coming from a starting point of being quite disillusioned with music, and a little bit disenchanted with the vagueness, and the terminology, and the imprecision of the whole thing. I think part of me has always wanted to be some kind of scientist, but I’ve ended up at perhaps not the opposite end, but quite far away from it.

Lucas Perry: Could you explain what you mean by vagueness and imprecision?

Sam Barker: I suppose the classical idea of what making music is about has a lot to do with the sort of western idea of individualism and about self-expression. I don’t know. There’s this romantic idea of artists having these frenzied creative bursts that give birth to the wonderful things, that it’s some kind of struggle. I just was feeling super disillusioned with all of that. Around that time, 2014 or 15, I was also reading a lot about social media, reading about behavioral science, trying to figure what was going on in this arena and how people are being pushed in different directions by this algorithmic system of information distribution. That kind of got me into this sort of behavioral science side of things, like the addictive part of the variable-ratio reward schedule with likes. It’s a free dopamine dispenser kind of thing. This was kind of getting me into reading about behavioral science and cognitive science. It was giving me a lot of clarity, but not much more sort of inspiration. It was basically like music.

Dance music especially is a sort of complex behavioral science. You do this and people do that. It’s all deeply ingrained. I sort of imagine the DJ as a sort of Skinner box operator pulling puppet strings and making people behave in different ways. Music producers are kind of designing clever programs using punishment and reward, or suspense and release, and controlling people’s behavior. The whole thing felt super pushy and not a very inspiring conclusion. Looking at the problem from a cognitive science point of view is just the framework that helped me to understand what the problem was in the first place, so this kind of problem of being manipulative. Behavioral science is kind of saying what we can make people do. Cognitive psychology is sort of figuring out why people do that. That was my entry point into cognitive psychology, and that was kind of the basis for Debiasing.

There’s always been sort of a parallel for me between what I make and my state of mind. When I’m in a more positive state, I tend to make things I’m happier with, and so on. Getting to the bottom of what the tricks were, I suppose, with dance music. I kind of understood implicitly, but I just wanted to figure out why things worked. I sort of came to the conclusion it was to do with a collection of biases we have, like the confirmation bias, and the illusion of truth effect, and the mere exposure effect. These things are like the guardians of four/four supremacy. Dance music can be pretty repetitive, and we describe it sometimes in really aggressive terminology. It’s a psychological kind of interaction.

Cognitive psychology was leading me to Kaplan’s law of the instrument. The law of the instrument says that if you give a small boy a hammer, he’ll find that everything he encounters requires pounding. I thought that was a good metaphor. The idea is that we get so used to using tools in a certain way that we lose sight of what it is we’re trying to do. We act in the way that the tool instructs us to do. I thought, what if you take away the hammer? That became a metaphor for me, in a sense, that David clarified in terms of pain reduction. We sort of put these painful elements into music in a way to give this kind of hedonic contrast, but we don’t really consider that that might not be necessary. What happens when we abolish these sort of negative elements? Are the results somehow released from this process? That was sort of the point, up until discovering the Hedonistic Imperative.

I think what I was needing at the time was a sort of framework, so I had the idea that music was decision making. To improve the results, you have to ask better questions, make better decisions. You can make some progress looking at the mechanics of that from a psychology point of view. What I was sort of lacking was a purpose to frame my decisions around. I sort of had the idea that music was a sort of a valence carrier, if you like, and that it could be tooled towards a sort of a greater purpose than just making people dance, which was for Debiasing the goal, really. It was to make people dance, but don’t use the sort of deeply ingrained cues that people are used to, and see if that works.

What was interesting was how broadly it was accepted, this first EP. There were all kinds of DJs playing it in techno, ambient, electro, all sorts of different styles. It reached a lot of people. It was as if taking out the most functional element made it more functional and more broadly appealing. That was the entry point to utilitarianism. There was sort of an accidentally utilitarian act, in a way, to sort of try and maximize the pleasure and minimize the pain. I suppose after landing in utilitarianism and searching for some kind of a framework for a sense of purpose in my work, the Hedonistic Imperative was probably the most radical, optimistic take on the system. Firstly, it put me in a sort of mindset where it granted permission to explore sort of utopian ideals, because I think the idea of pleasure is a little bit frowned upon in the art world. I think the art world turns its nose up at such direct cause and effect. The idea that producers could be paradise engineers of sorts, or the precursors to paradise engineers, that we almost certainly would have a role in a kind of sensory utopia of the future.

There was this kind of permission granted. You can be optimistic. You can enter into your work with good intentions. It’s okay to see music as a tool to increase overall wellbeing, in a way. That was kind of the guiding idea for my work in the studio. I’m trying, these days, to put more things into the system to make decisions in a more conscious way, at least where it’s appropriate to. This sort of notion of reducing pain and increasing pleasure was the sort of question I would ask at any stage of decision making. Did this thing that I did serve those ends? If not, take a step back and try a different approach.

There’s something else to be said about the way you sort of explore this utopian world without really being bogged down. You handle the objections in such a confident way. I called it a zero gravity world of ideas. I wanted to bring that zero gravity feeling to my work, and to see that technology can solve any problem in this sphere. Anything’s possible. All the obstacles are just imagined, because we fabricate these worlds ourselves. These are things that were really instructive for me, as an artist.

Lucas Perry: That’s quite an interesting journey. From the lens of understanding cognitive psychology and human biases, was it that you were seeing those biases in dance music itself? If so, what were those biases in particular?

Sam Barker: On both sides, on the way it’s produced and in the way it’s received. There’s sort of an unspoken acceptance. You’re playing a set and you take a kick drum out. That signals to people to perhaps be alert. The lighting engineer, they’ll maybe raise the lights a little bit, and everybody knows that the music is going into sort of a breakdown, which is going to end in some sort of climax. Then, at that point, the kick drum comes back in. We all know this pattern. It’s really difficult to understand why that works without referring to things like cognitive psychology or behavioral science.

Lucas Perry: What does the act of debiasing the reception and production of music look like and do to the music and its reception?

Sam Barker: The first part that I could control was what I put into it. The experiment was whether a debiased piece of dance music could perform the same functionality, or whether it really relies on these deeply ingrained cues. Without wanting to sort of pat myself on the back, it kind of succeeded in its purpose. It was sort of proof that this was a worthy concept.

Lucas Perry: You used the phrase, earlier, four/four. For people who are not into dance music, that just means a kick on each beat, which is ubiquitous in much of house and techno music. You’ve removed that, for example, in your album Debiasing. What are other things that you changed from your end, in the production of Debiasing, to debias the music from normal dance music structure?

Sam Barker: It was informing the structure of what I was doing so much that I wasn’t so much on a grid where you have predictable things happening. It’s a very highly formulaic and structured thing, and that all keys into the expectation and this confirmation bias that people, I think, get some kind of kick from when the predictable happens. They say, yep. There you go. I knew that was going to happen. That’s a little dopamine rush, but I think it’s sort of a cheap trick. I guess I was trying to get the tricks out of it, in a way, so figuring out what they were, and trying to reduce or eliminate them was the process for Debiasing.

Lucas Perry: That’s quite interesting and meaningful, I think. Let’s just take trap music. I know exactly how trap music is going to go. It has this buildup and drop structure. It’s basically universal across all dance music. Progressive house in the 2010s was also exactly like this. What else? Dubstep, of course, same exact structure. Everything is totally predictable. I feel like I know exactly what’s going to happen, having listened to electronic music for over a decade.

Sam Barker: It works, I think. It’s a tried and tested formula, and it does the job, but when you’re trying to imagine states beyond just getting a little kick from knowing what was going to happen, that’s the place that I was trying to get to, really.

Lucas Perry: After the release of Debiasing in 2018, which was a successful attempt at serving this goal and mission, you then discovered the Hedonistic Imperative by David Pearce, and kind of leaned into consequentialism, it seems. Then, in 2019, you had two releases. You had BARKER 001 and you had Utility. Now, Utility is the album which most explicitly adopts David Pearce’s work, specifically in the Hedonistic Imperative. You mentioned electronic dance producers and artists in general can be sort of the first wave of, or can perhaps assist in paradise engineering, insofar as that will be possible in the near- to short-term future, given advancements in technology. Is that sort of the explicit motivation and framing around those two releases of BARKER 001 and Utility?

Sam Barker: BARKER 001 was a few tracks that were taken out of the running for the album, because they didn’t sort of fit the concept. Really, I knew the last track was kind of alluding to the album. Otherwise, it was perhaps not sort of thematically linked. Hopefully, if people are interested in looking more into what’s behind the music, you can lead people into topics with the concept. With Utility, I didn’t want to just keep exploring cognitive biases and unpicking dance music structurally. It’s sort of a paradox, because I guess the Hedonistic Imperative argues that pleasure can exist without purpose, but I really was striving for some kind of purpose with the pleasure that I was getting from music. That sort of emerged from reading the Hedonistic Imperative, really, that you can apply music to this problem of raising the general level of happiness up a notch. I did sort of worry that by trying to please, it wouldn’t work, that it would be something that’s too sickly sweet. I mean, I’m pretty turned off by pop music, and there was this sort of risk that it would end up somewhere like that. That’s it, really. Just looking for a higher purpose with my work in music.

Lucas Perry: David, do you have any reactions?

David Pearce: Well, when I encountered Utility, yes, I was thrilled. As you know, essentially I’m a writer writing in quite heavy sub-academic prose. Sam’s work, I felt, helps give people a glimpse of our glorious future, paradise engineering. As you know, the reviews were extremely favorable. I’m not an expert critic or anything like that. I was just essentially happy and thrilled at the thought. It deserves to be mainstream. It’s really difficult, I think, to actually evoke the glorious future we are talking about. I mean, I can write prose, but in some sense music can evoke paradise better, at least for many people, than prose.

And it continues on. I highly recommend listening to the whole podcast: it is wonderfully edited and musical pieces referenced in the interview are brought up in real time for you to listen to. Barker also made a playlist of songs specifically for this podcast, which are played during the second half of the recording. It is delightful to listen to music that you know was produced with the explicit purpose of increasing your wellbeing. A wholesome message at last! Amazing art inspired by the ideology of Paradise Engineering, arriving near you… very soon.



As an aside, I think that shared visions of paradise are really essential for solving coordination problems. So…

Please join me in putting on Barker’s track Paradise Engineering, closing your eyes, and imagining – in detail – what the creation of an Institute for Paradise Engineering on a grand scale would look like. What would a positive Manhattan Project of Consciousness entail? What is the shortest path for us to create such a large-scale initiative?

By the way: the song is only 4 minutes long. So its duration is perfect for you to use as a guiding and grounding piece of media for a positive DMT trip. Press “play” immediately after you vaporize the DMT, sit back, relax, and try to render in your mind a posthuman paradise in which Full-Spectrum Supersentient Superintelligence has won and the threat of Pure Replicators has been averted. If you do this, please let me know what you experience as a result.


P.S. It’s worth noting that Barker’s conception of art is highly aligned with QRI’s view of what art could be like. See, in particular, models 4 through 8 in our article titled Harmonic Society.


Featured image by Michael Aaron Coleman

QRI’s FAQ

These are the answers to the most Frequently Asked Questions about the Qualia Research Institute. (See also: the glossary).


(Organizational) Questions About the Qualia Research Institute

  • What type of organization is QRI?

    • QRI is a nonprofit research group studying consciousness based in San Francisco, California. We are a registered 501(c)(3) organization.

  • What is the relationship between QRI, Qualia Computing, and Opentheory?

    • Qualia Computing and Opentheory are the personal blogs of QRI co-founders Andrés Gómez Emilsson and Michael Johnson, respectively. While QRI was in its early stages, all original QRI research was initially published on these two platforms. However, from August 2020 onward, this is shifting to a unified pipeline centered on QRI’s website.

  • Is QRI affiliated with an academic institution or university?

    • Although QRI does collaborate regularly with university researchers and laboratories, we are an independent research organization. Put simply, QRI is independent because we didn’t believe we could build the organization we wanted and needed to build within the very real constraints of academia. These constraints include institutional pressure to work on conventional projects, to optimize for publication metrics, and to clear various byzantine bureaucratic hurdles. It also includes professional and social pressure to maintain continuity with old research paradigms, to do research within an academic silo, and to pretend to be personally ignorant of altered states of consciousness. It’s not that good research cannot happen under these conditions, but we believe good consciousness research happens despite the conditions in academia, not because of them, and the best use of resources is to build something better outside of them.

  • How does QRI align with the values of EA?

    • Effective Altruism (EA) is a movement that uses evidence and reason to figure out how to do the most good. QRI believes this aesthetic is necessary and important for creating a good future. We also believe that if we want to do the most good, foundational research on the nature of the good is of critical importance. Two frames we offer are Qualia Formalism and Sentientism. Qualia Formalism is the claim that experience has a precise mathematical description, that a formal account of experience should be the goal of consciousness research. Sentientism is the claim that value and disvalue are entirely expressed in the nature and quality of conscious experiences. We believe EA is enriched by both Qualia Formalism and Sentientism.

  • What would QRI do with $10 billion?

    • Currently, QRI is a geographically distributed organization with access to commercial-grade neuroimaging equipment. The first thing we’d do with $10 billion is set up a physical headquarters for QRI and buy professional-grade neuroimaging devices (fMRI, MEG, PET, etc.) and neurostimulation equipment. We’d also hire teams of full-time physicists, mathematicians, electrical engineers, computer scientists, neuroscientists, chemists, philosophers, and artists. We’ve accomplished a great deal on a shoestring budget, but it would be hard to overestimate how significant being able to build deep technical teams and related infrastructure around core research threads would be for us (and, we believe, for the growing field of consciousness research). Scaling is always a process and we estimate our ‘room for funding’ over the next year is roughly $10 million. However, if we had sufficiently deep long-term commitments, we believe we could successfully scale both our organization and research paradigm into a first-principles approach for decisively diagnosing and curing most forms of mental illness. We would continue to run studies and experiments, collect interesting data about exotic and altered states of consciousness, pioneer new technologies that help eliminate involuntary suffering, and develop novel ways to enable conscious beings to safely explore the state-space of consciousness.

Questions About Our Research Approach

  • What differentiates QRI from other research groups studying consciousness?

    • The first major difference is that QRI breaks down “solving consciousness” into discrete subtasks; we’re clear about what we’re trying to do, which ontologies are relevant for this task, and what a proper solution will look like. This may sound like a small thing, but an enormous amount of energy is wasted in philosophy by not being clear about these things. This lets us “actually get to work.”

    • Second, our focus on valence is rare in the field of consciousness studies. A core bottleneck in understanding consciousness is determining what its ‘natural kinds’ are: terms which carve reality at the joints. We believe emotional valence (the pleasantness/unpleasantness of an experience) is one such natural kind, and this gives us a huge amount of information about phenomenology. It also offers a clean bridge for interfacing with (and improving upon) the best neuroscience.

    • Third, QRI takes exotic states of consciousness extremely seriously whereas most research groups do not. An analogy we make here is that ignoring exotic states of consciousness is similar to people before the scientific enlightenment thinking that they can understand the nature of energy, matter, and the physical world just by studying it at room temperature while completely ignoring extreme states such as what’s happening in the sun, black holes, plasma, or superfluid helium. QRI considers exotic states of consciousness as extremely important datapoints for reverse-engineering the underlying formalism for consciousness.

    • Lastly, we have a focus on precise, empirically testable predictions, which is rare in philosophy of mind. Any good theory of consciousness should also contribute to advancements in neuroscience. Likewise, any good theory of neuroscience should contribute to novel, bold, falsifiable predictions, and blueprints for useful things, such as new forms of therapy. Having such a full-stack approach to consciousness which does each of those two things is thus an important marker that “something interesting is going on here” and is simply very useful for testing and improving theory.

  • What methodologies are you using? How do you actually do research? 

    • QRI has three core areas of research: philosophy, neuroscience, and neurotechnology.

      • Philosophy: Our philosophy research is grounded in the eight problems of consciousness. This divide-and-conquer approach lets us explore each subproblem independently, while being confident that when all piecemeal solutions are added back together, they will constitute a full solution to consciousness.

      • Neuroscience: We’ve done original synthesis work on combining several cutting-edge theories of neuroscience (the free energy principle, the entropic brain, and connectome-specific harmonic waves) into a unified theory of Bayesian emotional updating; we’ve also built the world’s first first-principles method for quantifying emotional valence from fMRI. More generally, we focus on collecting high-valence neuroimaging datasets and developing algorithms to analyze, quantify, and visualize them. We also do extensive psychophysics research, focusing on both the fine-grained cognitive-emotional effects of altered states, and how different types of sounds, pictures, body vibrations, and forms of stimulation correspond with low and high valence states of consciousness. (A toy numerical sketch of this style of analysis appears after this list.)

      • Neurotechnology: We engage in both experimentation-driven exploration, tracking the phenomenological effects of various interventions, as well as theory-driven development. In particular, we’re prototyping a line of neurofeedback tools to help treat mental health disorders.
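To make the “Neuroscience” bullet above a little more concrete, here is a purely illustrative toy sketch – not QRI’s actual pipeline – that scores a set of hypothetical oscillatory modes (frequency/amplitude pairs) for pairwise roughness, using a Sethares-style parameterization of the Plomp–Levelt dissonance curve; in this toy model, lower totals correspond to smoother, more “consonant” configurations:

```python
import math

# Purely illustrative toy -- NOT QRI's actual valence/CDNS pipeline.
# Scores hypothetical oscillatory modes (frequency, amplitude) for pairwise
# sensory roughness, using Sethares' fit to the Plomp-Levelt dissonance curve.

def pair_dissonance(f1, f2, a1, a2):
    """Roughness contributed by a single pair of partials."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19.0)        # critical-bandwidth scaling
    x = s * (f_hi - f_lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def total_dissonance(freqs, amps):
    """Sum roughness over all pairs of modes; lower = 'smoother' spectrum."""
    return sum(pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

# Hypothetical mode sets (audio-range numbers chosen only for concreteness):
harmonic  = total_dissonance([440, 880, 1320, 1760], [1.0, 0.6, 0.4, 0.3])
clustered = total_dissonance([440, 465, 492, 521],  [1.0, 0.9, 0.8, 0.7])
print(f"harmonic set:  {harmonic:.4f}")   # low total roughness
print(f"clustered set: {clustered:.4f}")  # much higher total roughness
```

The real analyses operate on empirically derived quantities (e.g., harmonic decompositions of neuroimaging data) and involve far more than pairwise roughness; the snippet is only meant to convey the basic move of collapsing a spectrum into a scalar “smoothness” score.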

  • What does QRI hope to do over the next 5 years? Next 20 years?

    • Over the next five years, we intend to further our neurotechnology to the point that we can treat PTSD (post-traumatic stress disorder), especially treatment-resistant PTSD. We intend to empirically verify or falsify the symmetry theory of valence. If it is falsified, we will search for a new theory that ties together all of the empirical evidence we have discovered. We aim to create an Effective Altruist cause area regarding the reduction of intense suffering as well as the study of very high valence states of consciousness.

    • Over the next 20 years, we intend to become a world-class research center where we can put the discipline of “paradise engineering” (as described by philosopher David Pearce) on firm academic grounds.

Questions About Our Mission

  • How can understanding the science of consciousness make the world a better place?

    • Understanding consciousness would improve the world in a tremendous number of ways. One obvious outcome would be the ability to better predict what types of beings are conscious—from locked-in patients to animals to pre-linguistic humans—and what their experiences might be like.

    • We also think it’s useful to break down the benefits of understanding consciousness in three ways: reducing the amount of extreme suffering in the world, increasing the baseline well-being of conscious beings, and achieving new heights for what conscious states are possible to experience.

    • Without a good theory of valence, many neurological disorders will remain completely intractable. Disorders such as fibromyalgia, complex regional pain syndrome (CRPS), migraines, and cluster headaches are all currently medical puzzles and yet have incredibly negative effects on people’s livelihoods. We think that a mathematical theory of valence will explain why these things feel so bad and what the shortest path for getting rid of them looks like. Besides valence-related disorders, nearly all mental health disorders, from clinical depression and PTSD to schizophrenia and anxiety disorders, will become better understood as we discover the structure of conscious experience.

    • We also believe that many (though not all) of the zero-sum games people play are the products of inner states of dissatisfaction and suffering. Broadly speaking, people who have a surplus of cognitive and emotional energy tend to play more positive sum games, are more interested in cooperation, and are very motivated to do so. We think that studying states such as those induced by MDMA that combine both high valence and a prosocial behavior mindset can radically alter the game theoretical landscape of the world for the better.

  • What is the end goal of QRI? What does QRI’s perfect world look like?

    • In QRI’s perfect future:

      • There is no involuntary suffering and all sentient beings are animated by gradients of bliss,

      • Research on qualia and consciousness is done at a very large scale for the purpose of mapping out the state-space of consciousness and understanding its computational and intrinsic properties (we think that we’ve barely scratched the surface of knowledge about consciousness),

      • We have figured out the game-theoretical subtleties in order to make that world dynamic yet stable: radically positive, without just making it fully homogeneous and stuck in a local maximum.

Questions About Getting Involved

  • How can I follow QRI’s work?

    • You can start by signing up for our newsletter! This is by far our most important communication channel. We also have a Facebook page, Twitter account, and Linkedin page. Lastly, we share some exclusive tidbits of ideas and thoughts with our supporters on Patreon.

  • How can I get involved with QRI?

    • The best ways to help QRI are to:

      • Donate to help support our work.

      • Read and engage with our research. We love critical responses to our ideas and encourage you to reach out if you have an interesting thought!

      • Spread the word to friends, potential donors, and people that you think would make great collaborators with QRI.

      • Check out our volunteer page to find more detailed ways that you can contribute to our mission, from independent research projects to QRI content creation.

Questions About Consciousness

  • What assumptions about consciousness does QRI have? What theory of consciousness does QRI support?

    • The most important assumption that QRI is committed to is Qualia Formalism, the hypothesis that the internal structure of our subjective experience can be represented precisely by mathematics. We are also Valence Realists: we believe valence (how good or bad an experience feels) is a real and well-defined property of conscious states. Besides these positions, we are fairly agnostic and everything else is an educated guess useful for pragmatic purposes.

  • What does QRI think of functionalism?

    • QRI thinks that functionalism takes many high-quality insights about how systems work and combines them in such a way that both creates confusion and denies the possibility of progress. In its raw, unvarnished form, functionalism is simply skepticism about the possibility of Qualia Formalism. It is a statement that “there is nothing here to be formalized; consciousness is like élan vital, a confusion to be explained away.” It’s not actually a theory of consciousness; it’s an anti-theory. This is problematic in at least two ways:

      • 1. By assuming consciousness has formal structure, we’re able to make novel predictions that functionalism cannot (see e.g. QRI’s Symmetry Theory of Valence, and Quantifying Bliss). A few hundred years ago, there were many people who doubted that electromagnetism had a unified, elegant, formal structure, and this was a reasonable position at the time. However, in the age of the iPhone, skepticism that electricity is a “real thing” that can be formalized is no longer reasonable. Likewise, everything interesting and useful QRI builds using the foundation of Qualia Formalism stretches functionalism’s credibility thinner and thinner.

      • 2. Insofar as functionalism is skeptical about the formal existence of consciousness, it’s skeptical about the formal existence of suffering and all sentience-based morality. In other words, functionalism is a deeply amoral theory, which if taken seriously dissolves all sentience-based ethical claims. This is due to there being an infinite number of functional interpretations of a system: there’s no ground-truth fact of the matter about what algorithm a physical system is performing, about what information-processing it’s doing. And if there’s no ground-truth about which computations or functions are present, but consciousness arises from these computations or functions, then there’s no ground-truth about consciousness, or things associated with consciousness, like suffering. This is a strange and subtle point, but it’s very important. This point alone is not sufficient to reject functionalism: if the universe is amoral, we shouldn’t hold a false theory of consciousness in order to try to force reality into some ethical framework. But in debates about consciousness, functionalists should be up-front that functionalism and radical moral anti-realism is a package deal, that inherent in functionalism is the counter-intuitive claim that just as we can reinterpret which functions a physical system is instantiating, we can reinterpret what qualia it’s experiencing and whether it’s suffering.

    • For an extended argument, see Against Functionalism.

  • What does QRI think of panpsychism?

    • At QRI, we hold a position that is close to dual-aspect monism or neutral monism, which states that the universe is composed of one kind of thing that is neutral, and that both the mental and physical are two features of this same substance. One of the motivating factors for holding this view is that if there is deep structure in the physical, then there should be a corresponding deep structure to phenomenal experience. And we can tie this together with physicalism in the sense that the laws of physics ultimately describe fields of qualia. While there are some minor disagreements between dual-aspect monism and panpsychism, we believe that our position mostly fits well with a panpsychist view—that phenomenal properties are a fundamental feature of the world and aren’t spontaneously created only when a certain computation is being performed.

    • However, even with this view, there still are very important questions, such as: what makes a unified conscious experience? Where does one experience end and another begin? Without considering these problems in the light of Qualia Formalism, it is easy to tie animism into panpsychism and believe that inanimate objects like rocks, sculptures, and pieces of wood have spirits or complex subjective experiences. At QRI, we disagree with this and think that these types of objects might have extremely small pockets of unified conscious experience, but will mostly be masses of micro-qualia that are not phenomenally bound into some larger experience.

  • What does QRI think of IIT (Integrated Information Theory)?

    • QRI is very grateful for IIT because it is the first mainstream theory of consciousness that satisfies a Qualia Formalist account of experience. IIT says (and introduced the idea!) that for every conscious experience, there is a corresponding mathematical object such that the mathematical features of that object are isomorphic to the properties of the experience. QRI believes that without this idea, we cannot solve consciousness in a meaningful way, and we consider the work of Giulio Tononi to be one of our core research lineages. That said, we are not in complete agreement with the specific mathematical and ontological choices of IIT, and we think it may be trying to ‘have its cake and eat it too’ with regard to functionalism vs physicalism. For more, see Sections III-V of Principia Qualia.

    • We make no claim that some future version of IIT, particularly something more directly compatible with physics, couldn’t cleanly address our objections, and see a lot of plausible directions and promise in this space.

  • What does QRI think of the free energy principle and predictive coding?

    • On our research lineages page, we list the work of Karl Friston as one of QRI’s core research lineages. We consider the free energy principle (FEP), as well as related research such as predictive coding, active inference, the Bayesian brain, and cybernetic regulation, as an incredibly elegant and predictive story of how brains work. Friston’s idea also forms a key part of the foundation for QRI’s theory of brain self-organization and emotional updating, Neural Annealing.

    • However, we don’t think that the free energy principle is itself a theory of consciousness, as it suffers from many of the shortcomings of functionalism: we can tell the story about how the brain minimizes free energy, but we don’t have a way of pointing at the brain and saying *there* is the free energy! The FEP is an amazing logical model, but it’s not directly connected to any physical mechanism. It is a story that “this sort of abstract thing is going on in the brain” without a clear method of mapping this abstract story to reality. (A minimal toy example of this kind of prediction-error minimization appears at the end of this answer.)

    • Friston has supported this functionalist interpretation of his work, noting that he sees consciousness as a process of inference, not a thing. That said, we are very interested in his work on calculating the information geometry of Markov blankets, as this could provide a tacit foundation for a formalist account of qualia under the FEP. Regardless of this, though, we believe Friston’s work will play a significant role in a future science of mind.
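As a minimal sketch of the kind of story described above – and emphatically not Friston’s full formalism, nor a QRI model – here is a single-variable Gaussian example in which perception is gradient descent on (variational) free energy, which in this simple case reduces to minimizing precision-weighted prediction errors; all quantities are hypothetical:

```python
# Minimal toy sketch: perception as gradient descent on free energy for a
# single Gaussian latent cause mu, one sensory sample y, and a Gaussian prior.
# Illustration only; not Friston's full framework and not a QRI model.

def perceive(y, prior_mean, var_sensory=1.0, var_prior=1.0, lr=0.1, steps=100):
    """Infer the latent cause mu by descending dF/dmu."""
    mu = prior_mean                                # start at the prior expectation
    for _ in range(steps):
        eps_sensory = (y - mu) / var_sensory       # bottom-up prediction error
        eps_prior = (prior_mean - mu) / var_prior  # top-down prediction error
        # F ~ 0.5 * [(y - mu)**2 / var_sensory + (mu - prior_mean)**2 / var_prior],
        # so dF/dmu = -(eps_sensory + eps_prior); take a step downhill:
        mu += lr * (eps_sensory + eps_prior)
    return mu

# A surprising observation (y = 2) pulls the belief away from the prior (0),
# weighted by the relative precisions (inverse variances) of the two errors.
print(perceive(y=2.0, prior_mean=0.0))                  # ~1.0 (equal precisions)
print(perceive(y=2.0, prior_mean=0.0, var_prior=9.0))   # ~1.8 (vague prior: trust the data)
```

The fixed point is just the precision-weighted average of the prior expectation and the sensory evidence (the Bayes-optimal posterior mean for this toy model); the snippet only shows what “minimizing free energy” cashes out to in the simplest possible case, not how this would address the consciousness-related worries raised above.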

  • What does QRI think of global workspace theory?

    • The global workspace theory (GWT) is a cluster of empirical observations that seem to be very important for understanding what systems in the brain contribute to a reportable experience at a given point in time. The global workspace theory is a very important clue for answering questions of what philosophers call Access Consciousness, or the aspects of our experience on which we can report.

    • However, QRI does not consider the global workspace theory to be a full theory of consciousness. Parts of the brain that are not immediately contributing to the global workspace may be composed of micro qualia, or tiny clusters of experience. They’re obviously impossible to report on, but they are still relevant to the study of consciousness. In other words, just because a part of your brain wasn’t included in the instantaneous global workspace, doesn’t mean that it can’t suffer or it can’t experience happiness. We value global workspace research because questions of Access Consciousness are still very critical for a full theory of consciousness.

  • What does QRI think of higher-order theories of consciousness?

    • QRI is generally opposed to theories of consciousness that equate consciousness with higher order reflective thought and cognition. Some of the most intense conscious experiences are pre-reflective or unreflective such as blind panic, religious ecstasy, experiences of 5-MeO-DMT, and cluster headaches. In these examples, there is not much reflectivity nor cognition going on, yet they are intensely conscious. Therefore, we largely reject any attempt to define consciousness with a higher-order theory.

  • What is the relationship between evolution and consciousness?

    • The relationship between evolution and consciousness is very intricate and subtle. An eliminativist approach arrives at the simple idea that information processing of a certain type is evolutionarily advantageous, and perhaps we can call this consciousness. However, with a Qualia Formalist approach, it seems instead that the very properties of the mathematical object isomorphic to consciousness can play key roles (either causal or in terms of information processing) that make it advantageous for organisms to recruit consciousness.

    • If you don’t realize that consciousness maps onto a mathematical object with properties, you may think that you understand why consciousness was recruited by natural selection, but your understanding of the topic would be incomplete. In other words, to have a full understanding of why evolution recruited consciousness, you need to understand what advantages the mathematical object has. One very important feature of consciousness is its capacity for binding. For example, the unitary nature of experience—the fact that we can experience a lot of qualia simultaneously—may be a key feature of consciousness that accelerates the process of finding solutions to constraint satisfaction problems (a toy example of such a problem is sketched below). In turn, evolution would have a reason to recruit states of consciousness for computation. So rather than thinking of consciousness as identical with the computation that is going on in the brain, we can think of it as a resource with unique computational benefits that are powerful and dynamic enough to make organisms that use it more adaptable to their environments.
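
To make concrete what kind of computational task is meant by a constraint satisfaction problem, here is a minimal, purely illustrative Python sketch. The scheduling problem, variable names, and constraints are invented for the example; nothing here models phenomenal binding itself, which is the part the conjecture attributes to consciousness.

```python
# A toy constraint satisfaction problem (CSP), included only to illustrate the kind
# of computational task referred to above. The task, variables, and constraints are
# hypothetical; this is not a model of phenomenal binding.
from itertools import product

variables = ["a", "b", "c"]                    # three tasks to schedule
domains = {v: [0, 1, 2] for v in variables}    # each can go in time slot 0, 1, or 2

def satisfies(assignment):
    """All constraints must hold jointly: distinct slots, and task c after task a."""
    a, b, c = assignment["a"], assignment["b"], assignment["c"]
    return len({a, b, c}) == 3 and c > a

solutions = [
    dict(zip(variables, values))
    for values in product(*(domains[v] for v in variables))
    if satisfies(dict(zip(variables, values)))
]
print(solutions)
# A serial solver checks joint assignments one at a time; the conjecture in the text
# is that a phenomenally bound experience can "hold" many constraints at once, which
# is why binding might be computationally valuable to an organism.
```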

  • Does QRI think that animals are conscious?

    • QRI thinks there is a very high probability that every animal with a nervous system is conscious. We are agnostic about unified consciousness in insects, but we consider it very likely. We believe research on animal consciousness has relevance when it comes to treating animals ethically. Additionally, we do think that the ethical importance of consciousness has more to do with the pleasure-pain axis (valence) than with cognitive ability. In that sense, the suffering of non-human animals may be just as morally relevant as, if not more relevant than, that of humans. The cortex seems to play a largely inhibitory role for emotions, such that the larger the cortex is, the better we’re able to manage and suppress our emotions. Consequently, animals whose cortices are less developed than ours may experience pleasure and pain in a more intense and uncontrollable way, like a pre-linguistic toddler.

  • Does QRI think that plants are conscious?

    • We think it’s very unlikely that plants are conscious. The main reason is that they lack an evolutionary reason to recruit consciousness. Large-scale phenomenally bound experience may be very energetically expensive, and plants don’t have much energy to spare. Additionally, plants have thick cellulose walls that separate individual cells, making it very unlikely that plants can solve the binding problem and therefore create unified moments of experience.

  • Why do some people seek out pain?

    • This is a very multifaceted question. As a whole, we postulate that in the vast majority of cases, when somebody may be nominally pursuing pain or suffering, they’re actually trying to reduce internal dissonance in pursuit of consonance or they’re failing to predict how pain will actually feel. For example, when a person hears very harsh music, or enjoys extremely spicy food, this can be explained in terms of either masking other unpleasant sensations or raising the energy parameter of experience, the latter of which can lead to neural annealing: a very pleasant experience that manifests as consonance in the moment.

  • I sometimes like being sad. Is QRI trying to take that away from me?

    • Before we try to ‘fix’ something, it’s important to understand what it’s trying to do for us. Sometimes suffering leads to growth; sometimes creating valuable things involves suffering. Sometimes, ‘being sad’ feels strangely good. Insofar as suffering is doing good things for us, or for the world, QRI advocates a light touch (see Chesterton’s fence). However, we also suggest two things:

      • 1. Most kinds of melancholic or mixed states of sadness are usually pursued for reasons that cash out as some sort of pleasure. Bittersweet experiences are far preferable to intense agony or deep depression. If you enjoy sadness, it’s probably because there’s an aspect of your experience that is enjoyable. If it were possible to remove the sad part of your experience while maintaining the enjoyable part, you might be surprised to find that you prefer this modified experience to the original one.

      • 2. There are kinds of sadness and suffering that are just bad, that degrade us as humans, and would be better to never feel. QRI doesn’t believe in forcibly taking away voluntary suffering, or pushing bliss on people. But we would like to live in a world where people can choose to avoid such negative states, and on the margin, we believe it would be better for humanity for more people to be joyful, filled with a deep sense of well-being.

  • If dissonance is so negative, why is dissonance so important in music?

    • When you listen to very consonant music or consonant tones, you quickly adapt to these sounds and get bored of them. This has nothing to do with consonance itself being unpleasant and everything to do with learning in the brain. Whenever you experience the same stimulus repeatedly, the brain triggers a boredom mechanism and adds dissonance of its own in order to make you enjoy the stimulus less, or simply inhibits it so that you barely experience it at all. Semantic satiation is a classic example of this: repeating the same word over and over makes it lose its meaning. For this reason, to trigger many high-valence states of consciousness in succession, you need contrast. In particular, music works with gradients of consonance and dissonance, and in most cases it is moving towards consonance that feels good, rather than the absolute value of consonance. Music tends to feel best when a high absolute level of consonance is combined with a strong sense of moving towards an even higher level of consonance. Introducing some dissonance during a song later enhances the enjoyment of the more consonant parts, such as the chorus, which is typically reported to be the most euphoric part of a song and tends to be extremely consonant (the toy sketch below illustrates this “level plus gradient” idea).
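
As a purely illustrative sketch of the “level plus gradient” idea above, the following toy Python snippet scores valence as a weighted sum of the current consonance level and its recent change. The weights and the consonance trajectory are made up for the example; this is not a QRI model of music perception.

```python
# Toy illustration (not a QRI model): valence as a function of both the current level
# of consonance and the *change* in consonance, per the idea that moving towards
# consonance feels good over and above its absolute value. All numbers are invented.

def toy_valence(consonance, prev_consonance, w_level=1.0, w_gradient=2.0):
    """Hypothetical valence = weighted consonance level + weighted consonance gradient."""
    return w_level * consonance + w_gradient * (consonance - prev_consonance)

# A song sketch: the verse dips into mild dissonance, the chorus resolves to high consonance.
trajectory = [0.6, 0.5, 0.4, 0.3, 0.5, 0.7, 0.9, 0.95]  # consonance in [0, 1]

for t in range(1, len(trajectory)):
    v = toy_valence(trajectory[t], trajectory[t - 1])
    print(f"step {t}: consonance={trajectory[t]:.2f}, toy valence={v:.2f}")

# The chorus steps (rising from 0.5 towards 0.95) score highest: high consonance plus a
# strong positive gradient, matching the claim that earlier dissonance enhances the
# euphoria of the consonant resolution.
```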

  • What is QRI’s perspective on AI and AI safety research?

    • QRI thinks that consciousness research is critical for addressing AI safety. Without a precise way of quantifying an action’s impact on conscious experiences, we won’t be able to guarantee that an AI system has been programmed to act benevolently. Also, certain types of physical systems that perform computational tasks may be experiencing negative valence without any outside observer being aware of it. We need a theory of what produces unpleasant experiences to avoid inadvertently creating superintelligences that suffer intensely in the process of solving important problems or accidentally inflict large-scale suffering.

    • Additionally, we think that a very large percentage of what will make powerful AI dangerous is that the humans programming these machines and using these machines may be reasoning from states of loneliness, resentment, envy, or anger. By discovering ways to help humans transition away from these states, we can reduce the risks of AI by creating humans that are more ethical and aligned with consciousness more broadly. In short: an antidote for nihilism could lead to a substantial reduction in existential risk.

    • One way to think about QRI and AI safety is that the world is building AI, but doesn’t really have a clear, positive vision of what to do with AI. Lacking this, the default objective becomes “take over the world.” We think a good theory of consciousness could and will offer new visions of what kind of futures are worth building—new Schelling points that humanity (and AI researchers) could self-organize around.

  • Can digital computers implementing AI algorithms be conscious?

    • QRI is agnostic about this question. We have reasons to believe that digital computers in their current form cannot solve the phenomenal binding problem. Most of the activity in digital computers can be explained in a stepwise fashion in terms of localized processing of bits of information. Because of this, we believe that current digital computers could be creating fragments of qualia, but are unlikely to be creating strongly globally bound experiences. So, we consider the consciousness of digital computers unlikely, although given our current uncertainty over the Binding Problem (or alternatively framed, the Boundary Problem), this assumption is lightly held. In the previous question, when we write that “certain types of physical systems that perform computational tasks may be experiencing negative valence”, we assume that these hypothetical computers have some type of unified conscious experience as a result of having solved the phenomenal binding problem. For more on this topic, see: “What’s Out There?”

  • How much mainstream recognition has QRI’s work received, either for this line of research or others? Has it published in peer-reviewed journals, received any grants, or garnered positive reviews from other academics?

    • We are collaborating with researchers from Johns Hopkins University and Stanford University on several studies involving the analysis of neuroimaging data of high-valence states of consciousness. Additionally, we are currently preparing two publications for peer-reviewed journals on topics from our core research areas. Michael Johnson will be presenting at this year’s MCS seminar series, along with Karl Friston, Anil Seth, Selen Atasoy, Nao Tsuchiya, and others; Michael Johnson, Andrés Gómez Emilsson, and Quintin Frerichs have also given invited talks at various east-coast colleges (Harvard, MIT, Princeton, and Dartmouth).

    • Some well-known researchers and intellectuals who are familiar with and think positively about our work include: Robin Carhart-Harris, Scott Alexander, David Pearce, Steven Lehar, Daniel Ingram, and more. Scott Alexander acknowledged that QRI put together the paradigms that contributed to Friston’s integrative model of how psychedelics work before that research was published. Our track record so far has been to foreshadow (by several years) key discoveries later proposed and accepted in mainstream academia. Given our current research findings, we expect this trend to continue in the years to come.

Miscellaneous

  • How does QRI know what is best for other people/animals? What about cultural relativism?

    • We think that, to a large extent, people and animals work under the illusion that they are pursuing intentional objects, states of the external environment, or relationships that they may have with the external environment. However, when you examine these situations closely, you realize that what we actually pursue are states of high valence triggered by external circumstances. There may be evolutionary and cultural selection pressures that push us toward self-deception about how we actually function. And we consider it negative that these selection pressures make us less self-aware, because they often focus our energy on unpleasant, destructive, or fruitless strategies. QRI hopes to support people in fostering more self-awareness, which can come through experiments with one’s own consciousness, like meditation, as well as through a deeper theoretical understanding of what it is that we actually want.

  • How central is David Pearce’s work to the work of the QRI?

    • We consider David Pearce to be one of our core lineages. We particularly value his contribution to valence realism: the insistence that states of consciousness come with an overall valence, and that this is very morally relevant. We also consider David Pearce to be very influential in philosophy of mind; Pearce, for instance, coined the phrase ‘tyranny of the intentional object’, the title of a core QRI piece. We have been inspired by Pearce’s descriptions of what any scientific theory of consciousness should be able to explain, as well as his particular emphasis on the binding problem. David’s vision of a world animated by ‘gradients of bliss’ has also been very generative as a normative thought experiment which integrates human and non-human well-being. We do not necessarily agree with all of David Pearce’s work, but we respect him as an insightful and vivid thinker who has been brave enough to actually take a swing at describing utopia and who we believe is far ahead of his time.

  • What does QRI think of negative utilitarianism?

    • There’s general agreement within QRI that intense suffering is an extreme moral priority, and we’ve done substantial work on finding simple ways of getting rid of extreme suffering (with our research inspiring at least one unaffiliated startup to date). However, we find it premature to strongly endorse any pre-packaged ethical theory, especially because none of them are based on any formalism, but rather on an ungrounded concept of ‘utility’. The value of information here seems enormous, and we hope that we can get to a point where the ‘correct’ ethical theory may simply ‘pop out of the equations’ of reality. It’s also important to highlight that common versions and academic formulations of utilitarianism seem to be blind to many subtleties concerning valence. For example, they do not distinguish between mixed states of consciousness, where extreme pleasure is combined with extreme suffering in such a way that you judge the experience to be neither entirely suffering nor entirely happiness, and states of complete neutrality, such as extreme white noise. Because most formulations of utilitarianism do not distinguish between these, we are generally suspicious of the idea that philosophers of ethics have considered all of the relevant attributes of consciousness needed to make accurate judgments about morality.

  • What does QRI think of philosophy of mind departments?

    • We believe that the problems philosophy of mind departments address tend to be very disconnected from what truly matters from an ethical, moral, and philosophical point of view. For example, there is little appreciation of the value of bringing mathematical formalisms into discussions about the mind, or of what that might look like in practice. Likewise, there is close to no interest in preventing extreme suffering or in understanding its nature. Additionally, there is usually a disregard for extreme states of positive valence, and for strange or exotic experiences in general. It may be the case that there are worthwhile things happening in the departments and classes creating and studying this literature, but we find them characterized by processes which are unlikely to produce progress on their nominal purpose: creating a science of mind.

    • In particular, in academic philosophy of mind, we’ve seen very little regard for producing empirically testable predictions. There are millions of pages written about philosophy of mind, but the number of pages that provide precise, empirically testable predictions is vanishingly small.

  • What therapies does QRI recommend for depression, anxiety, and chronic pain?

    • At QRI, we do not make specific recommendations to individuals, but rather point to areas of research that we consider to be extremely important, tractable, and neglected, such as anti-tolerance drugs, neural annealing techniques, frequency specific microcurrent for kidney stone pain, and N,N-DMT and other tryptamines for cluster headaches and migraines.

  • Why does QRI think it’s so important to focus on ending extreme suffering? 

    • QRI thinks ending extreme suffering is important, tractable, and neglected. It’s important because of the logarithmic scales of pleasure and pain—the fact that extreme suffering is worse by orders of magnitude than what people intuitively believe (see the illustrative arithmetic below). It’s tractable because many types of extreme suffering have existing solutions that are fairly trivial, or at least have a viable path toward being solved with moderately funded research programs. And it’s neglected mostly because people are unaware of the existence of these states, not necessarily because of their rarity. For example, 10% of the population experiences kidney stones at some point in their life, but for reasons having to do with trauma, PTSD, and the state-dependence of memory, even people who have suffered from kidney stones do not typically end up dedicating their time or resources toward eradicating them.
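
As a back-of-the-envelope illustration of the point about logarithmic scales, the toy snippet below maps a 0–10 pain rating to an implied underlying intensity, assuming a decibel-like scale where each extra point multiplies intensity by a fixed factor. The base of 10 is a hypothetical parameter chosen purely for illustration, not a QRI estimate.

```python
# Illustrative only: a toy mapping from a 0-10 pain *rating* to underlying *intensity*,
# assuming ratings behave like a logarithmic (decibel-like) scale. The base is a
# made-up parameter for the example, not an empirically derived number.

def implied_intensity(rating: float, base: float = 10.0) -> float:
    """Intensity implied by a rating if each extra point multiplies intensity by `base`."""
    return base ** rating

for r in (2, 5, 8, 10):
    print(f"rating {r}/10 -> {implied_intensity(r):,.0f}x baseline intensity")

# Under this hypothetical base-10 mapping, a 10/10 experience is not "twice as bad" as a
# 5/10 but roughly 100,000 times more intense -- the sense in which linear intuitions can
# drastically underweight extreme suffering.
```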

    • It’s also likely that if we can meaningfully improve the absolute worst experiences, much of the knowledge we’ll gain in that process will translate into other contexts. In particular, we should expect to figure out how to make moderately depressed people happier, fix more mild forms of pain, improve the human hedonic baseline, and safely reach extremely great peak states. Mood research is not a zero-sum game. It’s a web of synergies.



Many thanks to Andrew Zuckerman, Mackenzie Dion, and Mike Johnson for their collaboration in putting this together. Featured image is QRI’s logo – animated by Hunter Meyer.

One for All and All for One

By David Pearce (response to Quora question: “What does David Pearce think of closed, empty, and open individualism?“)


Vedanta teaches that consciousness is singular, all happenings are played out in one universal consciousness and there is no multiplicity of selves.

 

– Erwin Schrödinger, ‘My View of the World’, 1951

Enlightenment came to me suddenly and unexpectedly one afternoon in March [1939] when I was walking up to the school notice board to see whether my name was on the list for tomorrow’s football game. I was not on the list. And in a blinding flash of inner light I saw the answer to both my problems, the problem of war and the problem of injustice. The answer was amazingly simple. I called it Cosmic Unity. Cosmic Unity said: There is only one of us. We are all the same person. I am you and I am Winston Churchill and Hitler and Gandhi and everybody. There is no problem of injustice because your sufferings are also mine. There will be no problem of war as soon as you understand that in killing me you are only killing yourself.

 

– Freeman Dyson, ‘Disturbing the Universe’, 1979

Common sense assumes “closed” individualism: we are born, live awhile, and then die. Common sense is wrong about most things, and the assumption of enduring metaphysical egos is true to form. Philosophers sometimes speak of the “indiscernibility of identicals”. If a = b, then everything true of a is true of b. This basic principle of logic is trivially true. Our legal system, economy, politics, academic institutions and personal relationships assume it’s false. Violation of elementary logic is a precondition of everyday social life. It’s hard to imagine any human society that wasn’t founded on such a fiction. The myth of enduring metaphysical egos and “closed” individualism also leads to a justice system based on scapegoating. If we were accurately individuated, then such scapegoating would seem absurd.

Among the world’s major belief-systems, Buddhism comes closest to acknowledging “empty” individualism: enduring egos are a myth (cf. “non-self” or Anatta – Wikipedia). But Buddhism isn’t consistent. All our woes are supposedly the product of bad “karma”, the sum of our actions in this and previous states of existence. Karma as understood by Buddhists isn’t the deterministic cause and effect of classical physics, but rather the contribution of bad intent and bad deeds to bad rebirths.

Among secular philosophers, the best-known defender of (what we would now call) empty individualism minus the metaphysical accretions is often reckoned David Hume. Yet Hume was also a “bundle theorist”, sceptical of the diachronic and the synchronic unity of the self. At any given moment, you aren’t a unified subject (“For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat, cold, light or shade, love or hatred, pain or pleasure. I can never catch myself at any time without a perception, and can never observe anything but the perception” (‘On Personal Identity’, A Treatise of Human Nature, 1739)). So strictly, Hume wasn’t even an empty individualist. Contrast Kant’s “transcendental unity of apperception”, aka the unity of the self.

An advocate of common-sense closed individualism might object that critics are abusing language. Thus “Winston Churchill”, say, is just the name given to an extended person born in 1874 who died in 1965. But adhering to this usage would mean abandoning the concept of agency. When you raise your hand, a temporally extended entity born decades ago doesn’t raise its collective hand. Raising your hand is a specific, spatio-temporally located event. In order to make sense of agency, only a “thin” sense of personal identity can work.

According to “open” individualism, there exists only one numerically identical subject who is everyone at all times. Open individualism was christened by philosopher Daniel Kolak, author of I Am You (2004). The roots of open individualism are ancient, stretching back at least to the Upanishads. The older name is monopsychism. I am Jesus, Moses and Einstein, but also Hitler, Stalin and Genghis Khan. And I am also all pigs, dinosaurs and ants: subjects of experience date to the late Pre-Cambrian, if not earlier.

My view?
My ethical sympathies lie with open individualism; but as it stands, I don’t see how a monopsychist theory of identity can be true. Open or closed individualism might (tenuously) be defensible if we were electrons (cf. One-electron universe – Wikipedia). However, sentient beings are qualitatively and numerically different. For example, the half-life of a typical protein in the brain is an estimated 12–14 days. Identity over time is a genetically adaptive fiction for the fleetingly unified subjects of experience generated by the CNS of animals evolved under pressure of natural selection (cf. Was Parfit correct we’re not the same person that we were when we were born?). Even memory is a mode of present experience. Both open and closed individualism are false.

By contrast, the fleeting synchronic unity of the self is real, scientifically unexplained (cf. the binding problem) and genetically adaptive. How a pack of supposedly decohered membrane-bound neurons achieves a classically impossible feat of virtual world-making leads us into deep philosophical waters. But whatever the explanation, I think empty individualism is true. Thus I share with my namesakes – the authors of The Hedonistic Imperative (1995) – the view that we ought to abolish the biology of suffering in favour of genetically-programmed gradients of superhuman bliss. Yet my namesakes elsewhere in tenselessly existing space-time (or Hilbert space) physically differ from the multiple David Pearces (DPs) responding to your question. Using numerical superscripts, e.g. DP^564356, DP^54346 (etc), might be less inappropriate than using a single name. But even “DP” here is misleading because such usage suggests an enduring carrier of identity. No such enduring carrier exists, merely modestly dynamically stable patterns of fundamental quantum fields. Primitive primate minds were not designed to “carve Nature at the joints”.

However, just because a theory is true doesn’t mean humans ought to believe in it. What matters are its ethical consequences. Will the world be a better or worse place if most of us are closed, empty or open individualists? Psychologically, empty individualism is probably the least emotionally satisfying account of personal identity – convenient when informing an importunate debt-collection company they are confusing you with someone else, but otherwise a recipe for fecklessness, irresponsibility and overly-demanding feats of altruism. Humans would be more civilised if most people believed in open individualism. The factory-farmed pig destined to be turned into a bacon sandwich is really you: the conventional distinction between selfishness and altruism collapses. Selfish behaviour is actually self-harming. Not just moral decency, but decision-theoretic rationality dictates choosing a veggie burger rather than a meat burger. Contrast the metaphysical closed individualism assumed by, say, the Less Wrong Decision Theory FAQ. And indeed, all first-person facts, not least the distress of a horribly abused pig, are equally real. None are ontologically privileged. More speculatively, if non-materialist physicalism is true, then fields of subjectivity are what the mathematical formalism of quantum field theory describes. The intrinsic nature argument proposes that only experience is physically real. On this story, the mathematical machinery of modern physics is transposed to an idealist ontology. This conjecture is hard to swallow; I’m agnostic.

Image: Kuppel, Bern, 20.5.2003. Copyright Peter Mosimann.

“One for all, all for one” – unofficial motto of Switzerland.

Speculative solutions to the Hard Problem of consciousness aside, the egocentric delusion of Darwinian life is too strong for most people to embrace open individualism with conviction. Closed individualism is massively fitness-enhancing (cf. Are you the center of the universe?). Moreover, temperamentally happy people tend to have a strong sense of enduring personal identity and agency; depressives have a weaker sense of personhood. Most of the worthwhile things in this world (as well as its biggest horrors) are accomplished by narcissistic closed individualists with towering egos. Consider the transhumanist agenda. Working on a cure for the terrible disorder we know as aging might in theory be undertaken by empty individualists or open individualists; but in practice, the impetus for defeating death and aging comes from strong-minded and “selfish” closed individualists who don’t want their enduring metaphysical egos to perish. Likewise, the well-being of all sentience in our forward light-cone – the primary focus of most DPs – will probably be delivered by closed individualists. Benevolent egomaniacs will most likely save the world.

“One for all, all for one”, as Alexandre Dumas put it in The Three Musketeers?
Maybe one day: full-spectrum superintelligence won’t have a false theory of personal identity. “Unus pro omnibus, omnes pro uno” is the unofficial motto of Switzerland. It deserves to be the ethos of the universe.


The Qualia Explosion

Extract from “Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?” (talk) by David Pearce

Supersentience: Turing plus Shulgin?

Compared to the natural sciences (cf. the Standard Model in physics) or computing (cf. the Universal Turing Machine), the “science” of consciousness is pre-Galilean, perhaps even pre-Socratic. State-enforced censorship of the range of subjective properties of matter and energy in the guise of a prohibition on psychoactive experimentation is a powerful barrier to knowledge. The legal taboo on the empirical method in consciousness studies prevents experimental investigation of even the crude dimensions of the Hard Problem, let alone locating a solution-space where answers to our ignorance might conceivably be found.

Singularity theorists are undaunted by our ignorance of this fundamental feature of the natural world. Instead, the Singularitarians offer a narrative of runaway machine intelligence in which consciousness plays a supporting role ranging from the minimal and incidental to the completely non-existent. However, highlighting the Singularity movement’s background assumptions about the nature of mind and intelligence, not least the insignificance of the binding problem to AGI, reveals why FUSION and REPLACEMENT scenarios are unlikely – though a measure of “cyborgification” of sentient biological robots augmented with ultrasmart software seems plausible and perhaps inevitable.

If full-spectrum superintelligence does indeed entail navigation and mastery of the manifold state-spaces of consciousness, and ultimately a seamless integration of this knowledge with the structural understanding of the world yielded by the formal sciences, then where does this elusive synthesis leave the prospects of posthuman superintelligence? Will the global proscription of radically altered states last indefinitely?

Social prophecy is always a minefield. However, there is one solution to the indisputable psychological health risks posed to human minds by empirical research into the outlandish state-spaces of consciousness unlocked by ingesting the tryptamines, phenylethylamines, isoquinolines and other pharmacological tools of sentience investigation. This solution is to make “bad trips” physiologically impossible – whether for individual investigators or, in theory, for human society as a whole. Critics of mood-enrichment technologies sometimes contend that a world animated by information-sensitive gradients of bliss would be an intellectually stagnant society: crudely, a Brave New World. On the contrary, biotech-driven mastery of our reward circuitry promises a knowledge explosion in virtue of allowing a social, scientific and legal revolution: safe, full-spectrum biological superintelligence. For genetic recalibration of hedonic set-points – as distinct from creating uniform bliss – potentially leaves cognitive function and critical insight both sharp and intact; and offers a launchpad for consciousness research in mind-spaces alien to the drug-naive imagination. A future biology of invincible well-being would not merely immeasurably improve our subjective quality of life: empirically, pleasure is the engine of value-creation. In addition to enriching all our lives, radical mood-enrichment would permit safe, systematic and responsible scientific exploration of previously inaccessible state-spaces of consciousness. If we were blessed with a biology of invincible well-being, exotic state-spaces would all be saturated with a rich hedonic tone.

Until this hypothetical world-defining transition, pursuit of the rigorous first-person methodology and rational drug-design strategy pioneered by Alexander Shulgin in PiHKAL and TiHKAL remains confined to the scientific counterculture. Investigation is risky, mostly unlawful, and unsystematic. In mainstream society, academia and peer-reviewed scholarly journals alike, ordinary waking consciousness is assumed to define the gold standard in which knowledge-claims are expressed and appraised. Yet to borrow a homely-sounding quote from Einstein, “What does the fish know of the sea in which it swims?” Just as a dreamer can gain only limited insight into the nature of dreaming consciousness from within a dream, likewise the nature of “ordinary waking consciousness” can only be glimpsed from within its confines. In order to scientifically understand the realm of the subjective, we’ll need to gain access to all its manifestations, not just the impoverished subset of states of consciousness that tended to promote the inclusive fitness of human genes on the African savannah.

Why the Proportionality Thesis Implies an Organic Singularity

So if the preconditions for full-spectrum superintelligence, i.e. access to superhuman state-spaces of sentience, remain unlawful, where does this roadblock leave the prospects of runaway self-improvement to superintelligence? Could recursive genetic self-editing of our source code repair the gap? Or will traditional human personal genomes be policed by a dystopian Gene Enforcement Agency in a manner analogous to the coercive policing of traditional human minds by the Drug Enforcement Agency?

Even in an ideal regulatory regime, the process of genetic and/or pharmacological self-enhancement is intuitively too slow for a biological Intelligence Explosion to be a live option, especially when set against the exponential increase in digital computer processing power and inorganic AI touted by Singularitarians. Prophets of imminent human demise in the face of machine intelligence argue that there can’t be a Moore’s law for organic robots. Even the Flynn Effect, the three-points-per-decade increase in IQ scores recorded during the 20th century, is comparatively puny; and in any case, this narrowly-defined intelligence gain may now have halted in well-nourished Western populations.

However, writing off all scenarios of recursive human self-enhancement would be premature. Presumably, the smarter our nonbiological AI, the more readily AI-assisted humans will be able recursively to improve our own minds with user-friendly wetware-editing tools – not just editing our raw genetic source code, but also the multiple layers of transcription and feedback mechanisms woven into biological minds. In turn, our ever-smarter minds will be able to devise progressively more sophisticated, and progressively more user-friendly, wetware-editing tools. These tools can accelerate our own recursive self-improvement – and help us manage potential threats from nonfriendly AGI that might harm rather than help us, assuming that our earlier strictures against the possibility of digital software-based unitary minds were mistaken. MIRI rightly call attention to how small enhancements can yield immense cognitive dividends: the relatively short genetic distance between humans and chimpanzees suggests how relatively small enhancements can exert momentous effects on a mind’s general intelligence, thereby implying that AGIs might likewise become disproportionately powerful through a small number of tweaks and improvements. In the post-genomic era, presumably the same holds true for AI-assisted humans and transhumans editing their own minds. What David Chalmers calls the proportionality thesis, i.e. that increases in intelligence lead to proportionate increases in the capacity to design intelligent systems, will be vindicated as recursively self-improving organic robots modify their own source code and bootstrap their way to full-spectrum superintelligence: in essence, an organic Singularity (see the toy formalization below). And in contrast to classical digital zombies, superficially small molecular differences in biological minds can result in profoundly different state-spaces of sentience. Compare the ostensibly trivial difference in gene expression profiles of neurons mediating phenomenal sight and phenomenal sound – and the radically different visual and auditory worlds they yield.
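
As a minimal formal sketch (not part of the original talk), the proportionality thesis can be read as a simple growth law; under that illustrative assumption, iterated proportional gains compound into runaway growth:

```latex
% Toy formalization (illustrative assumption, not from the talk): let I(t) denote
% the intelligence of the self-improving system at time t. The proportionality
% thesis is read as: the rate of self-improvement is proportional to the current
% capacity to design intelligent systems, i.e.
\[
  \frac{dI}{dt} = k\, I(t), \qquad k > 0,
\]
% whose solution is unbounded exponential growth from any starting point I_0:
\[
  I(t) = I_0\, e^{k t}.
\]
% On this reading, even modest proportional gains per design cycle compound into an
% "explosion" -- the organic Singularity of the section title -- provided the
% proportionality constant k stays positive as the substrate is edited.
```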

Compared to FUSION or REPLACEMENT scenarios, the AI-human CO-EVOLUTION conjecture is apt to sound tame. The likelihood that our posthuman successors will also be our biological descendants suggests at most a radical conservatism. In reality, a post-Singularity future where today’s classical digital zombies were superseded merely by faster, more versatile classical digital zombies would be infinitely duller than a future of full-spectrum supersentience. For all insentient information processors are exactly the same inasmuch as the living dead are not subjects of experience. They’ll never even know what it’s like to be “all dark inside” – or the computational power of phenomenal object-binding that yields illumination. By contrast, posthuman superintelligence will not just be quantitatively greater but also qualitatively alien to archaic Darwinian minds. Cybernetically enhanced and genetically rewritten biological minds can abolish suffering throughout the living world and banish experience below “hedonic zero” in our forward light-cone, an ethical watershed without precedent. Post-Darwinian life can enjoy gradients of lifelong blissful supersentience with the intensity of a supernova compared to a glow-worm. A zombie, on the other hand, is just a zombie – even if it squawks like Einstein. Posthuman organic minds will dwell in state-spaces of experience for which archaic humans and classical digital computers alike have no language, no concepts, and no words to describe our ignorance. Most radically, hyperintelligent organic minds will explore state-spaces of consciousness that do not currently play any information-signalling role in living organisms, and are impenetrable to investigation by digital zombies. In short, biological intelligence is on the brink of a recursively self-amplifying Qualia Explosion – a phenomenon of which digital zombies are invincibly ignorant, and invincibly ignorant of their own ignorance. Humans too, of course, are mostly ignorant of what we’re lacking: the nature, scope and intensity of such posthuman superqualia are beyond the bounds of archaic human experience. Even so, enrichment of our reward pathways can ensure that full-spectrum biological superintelligence will be sublime.


Image Credit: MohammadReza DomiriGanji

Why don’t more effective altruists work on the Hedonistic Imperative?

By David Pearce (in response to a Quora question)

 

Life could be wonderful. Genetically phasing out suffering in favour of hardwired happiness ought to be mainstream. Today, it’s a fringe view. It’s worth asking why.

Perhaps the first scientifically-literate blueprint for a world without suffering was written by Lewis Mancini. “Brain stimulation and the genetic engineering of a world without pain” was published in the journal Medical Hypotheses in 1990. As far as I can tell, the paper sunk almost without a trace. Ignorant of Mancini’s work, I wrote The Hedonistic Imperative (HI) in 1995. I’ve plugged away at the theme ever since. Currently, a small, scattered minority of researchers believe that replacing the biology of suffering with gradients of genetically preprogrammed well-being is not just ethical but obviously so.

Alas, perceptions of obviousness vary. Technically, at least, the abolitionist project can no longer easily be dismissed as science fiction. The twenty-first century has already witnessed the decoding of the human genome, the development and imminent commercialisation of in vitro meat, the dawn of CRISPR genome-editing and the promise of synthetic gene drives. Identification of alleles and allelic combinations governing everything from pain-sensitivity to hedonic range and hedonic set-points is complementing traditional twin studies. The high genetic loading of subjective well-being and mental ill-health is being deciphered. The purely technical arguments against the genetic feasibility of creating a happy living world are shrinking. But genetic status quo bias is deeply entrenched. The sociopolitical obstacles to reprogramming the biosphere are daunting.

You ask specifically about effective altruists (EAs). Some effective altruists (cf. Effective Altruism: How Can We Best Help Others? by Magnus Vinding) do explore biological-genetic solutions to complement socio-economic reform and other environmental interventions. Most don’t. Indeed, a significant minority of EAs expressly urge a nonbiological focus for EA. For example, see Why I Don’t Focus On The Hedonistic Imperative by the influential EA Brian Tomasik. I can’t offer a complete explanation, but I think these facts are relevant:

1) Timescales. Lewis Mancini reckons that completion of the abolitionist project will take thousands of years. HI predicts that the world’s last unpleasant experience will occur a few centuries hence, perhaps in some obscure marine invertebrate. If, fancifully, consensus existed for a global species-project, then 100 – 150 years (?) might be a credible forecast. Alas, such a timescale is wildly unrealistic. No such consensus exists or is plausibly in prospect. For sure, ask people a question framed on the lines of “Do you agree with Gautama Buddha, ‘May all that have life be delivered from suffering’?” and assent might be quite high. Some kind of quantified, cross-cultural study of radical Buddhist or Benthamite abolitionism would be interesting. Yet most people balk at what the scientific implementation of such a vision practically entails – if they reflect on abolitionist bioethics at all. “That’s just Brave New World” is a common response among educated Westerners to the idea of engineering “unnatural” well-being. Typically, EAs are focused on measurable results in foreseeable timeframes in areas where consensus is broad and deep, for instance the elimination of vector-borne disease. Almost everyone agrees that eliminating malaria will make the world a better place. Malaria can be eradicated this century.

2) The Hedonic Treadmill. In recent decades, popular awareness of the hedonic treadmill has grown. Sadly, most nonbiological interventions to improve well-being may not have the dramatic long-term impact we naïvely hope. However, awareness of the genetic underpinnings of the hedonic treadmill is sketchy. Knowledge of specific interventions we can plan to subvert its negative feedback mechanisms is sketchier still. Compared to more gross and visible ills, talk of “low hedonic set-points” (etc) is nebulous. Be honest, which would you personally choose if offered: a vast national lottery win (cf. How Winning The Lottery Affects Happiness) or a modestly higher hedonic set-point? Likewise, the prospect of making everyone on Earth prosperous sounds more effectively altruistic (cf. Can “effective altruism” maximise the bang for each charitable buck?) than raising their hedonic defaults – even if push-button hedonic uplift were now feasible, which it isn’t, or at least not without socially unacceptable consequences.

3) The Spectre of Eugenics. Any confusion between the racial hygiene policies of the Third Reich and the project of genetically phasing out suffering in all sentient beings ought to be laughable. Nonetheless, many people recoil at the prospect of “designer babies”. Sooner or later, the “e”-word crops up in discussions of genetic remediation and enhancement. If we assume that bioconservative attitudes to baby-making will prevail worldwide indefinitely, and the reproductive revolution extends at best only to a minority of prospective parents, then the abolitionist project will never happen. What we call the Cambrian Explosion might alternatively be classified as the Suffering Explosion. If we don’t tackle the biological-genetic roots of suffering at source – “eugenics”, if you will – then pain and suffering will proliferate until Doomsday. Without eugenics, the world’s last unpleasant experience may occur millions or even billions of years hence.

4) Core Values. Self-identified effective altruists range from ardent life lovers focused on existential risks, AGI and the hypothetical Intelligence Explosion to radical anti-natalists and negative utilitarians committed to suffering-focused ethics (cf. What are the main differences between the anti-natalism/efilism community and the negative utilitarian/”suffering-focused ethics” wing of the effective altruism community?). There’s no inherent conflict with HI at either extreme. On the one hand, phasing out the biology of suffering can potentially minimise existential risk. Crudely, the more we love life, the more we want to preserve it. On the opposite wing of EA, radical anti-natalists oppose reproduction because they care about suffering, not because of opposition to new babies per se. Technically speaking, CRISPR babies could be little bundles of joy – as distinct from today’s tragic genetic experiments. In practice, however, life-loving EAs are suspicious of (notionally) button-pressing negative utilitarians, whereas radical anti-natalists view worldwide genetic engineering as even more improbable than their preferred option of voluntary human extinction.

5) Organisation and Leadership. Both secular and religious organizations exist whose tenets include the outright abolition of suffering. EAs can and do join such groups. However, sadly, I don’t know of a single organisation dedicated to biological-genetic solutions to the problem of suffering. Among transhumanists, for instance, radical life-extension and the prospect of posthuman superintelligence loom larger than biohappiness – though article 7 of the Transhumanist Declaration is admirably forthright: a commitment to the well-being of all sentience. Also, I think we need star power: the blessing of some charismatic billionaire or larger-than-life media celebrity. “Bill Gates says let’s use biotechnology to phase out the genetic basis of suffering” would be a breakthrough. Or even Justin Bieber.

For my part, I’m just a writer/researcher. We have our place! My guess is that this century will see more blueprints and manifestos and grandiose philosophical proposals together with concrete, incremental progress from real scientists. The genetic basis of suffering will eventually be eradicated across the tree of life, not in the name of anything “hedonistic” or gradients of intelligent bliss, and certainly not in the name of negative utilitarianism, but perhaps under the label of the World Health Organisation’s definition of health (cf. Constitution of WHO: principles). Taken literally, the constitution of the WHO enshrines the most daringly ambitious vision of the future of sentience ever conceived. Lifelong good health (“complete physical, mental and social well-being”) for all sentient beings is a noble aspiration. Regardless of race or species, all of us deserve good health as so defined. A biology of information-sensitive gradients of physical, mental and social well-being (HI) is more modest and workable thanks to biotech. Optimistically, life on Earth has only a few more centuries of misery and malaise to go.

The Appearance of Arbitrary Contingency to Our Diverse Qualia

By David Pearce (Mar 21, 2012; Reddit AMA)

 

The appearance of arbitrary contingency to our diverse qualia – and undiscovered state-spaces of posthuman qualia and hypothetical micro-qualia – may be illusory. Perhaps they take the phenomenal values they do as a matter of logico-mathematical necessity. I’d make this conjecture against the backdrop of some kind of zero ontology. Intuitively, there seems no reason for anything at all to exist. The fact that the multiverse exists (apparently) confounds one’s pre-reflective intuitions in the most dramatic possible way. However, this response is too quick. The cosmic cancellation of the conserved constants (mass-energy, charge, angular momentum) to zero, and the formal equivalence of zero information to all possible descriptions [the multiverse?], mean we have to take this kind of explanation-space seriously. The most recent contribution to the zero-ontology genre is physicist Lawrence Krauss’s readable but frustrating “A Universe from Nothing: Why There Is Something Rather Than Nothing”. Anyhow, how does a zero ontology tie in with (micro-)qualia? Well, if the solutions to the master equation of physics do encode the field-theoretic values of micro-qualia, then perhaps their numerically encoded textures “cancel out” to zero too. To use a trippy, suspiciously New-Agey-sounding metaphor, imagine the colours of the rainbow displayed as a glorious spectrum – but on recombination cancelling out to no colour at all. Anyhow, I wouldn’t take any of this too seriously: just speculation on idle speculation. It’s tempting simply to declare the issue of our myriad qualia to be an unfathomable mystery. And perhaps it is. But mysterianism is sterile.

The Banality of Evil

In response to the Quora question “I feel like a lot of evil actions in the world have supporters who justify them (like Nazis). Can you come up with some convincing ways in which some of the most evil actions in the world could be justified?”, David Pearce writes:


Tout comprendre, c’est tout pardonner.”
(Leo Tolstoy, War and Peace)

Despite everything, I believe that people are really good at heart.
(Anne Frank)

The risk of devising justifications of the worst forms of human behaviour is that there are people gullible enough to believe them. It’s not as though anti-Semitism died with the Third Reich. Even offering dispassionate causal explanation can sometimes be harmful. So devil’s advocacy is an intellectual exercise to be used sparingly.

That said, the historical record suggests that human societies don’t collectively set out to do evil. Rather, primitive human emotions get entangled with factually mistaken beliefs and ill-conceived metaphysics with ethically catastrophic consequences. Thus the Nazis seriously believed in the existence of an international Jewish conspiracy against the noble Aryan race. Hitler, so shrewd in many respects, credulously swallowed The Protocols of the Elders of Zion. And as his last testament disclosed, obliquely, Hitler believed that the gas chambers were a “more humane means” than the terrible fate befalling the German Volk. Many Nazis (Himmler, Höss, Stangl, and maybe even Eichmann) believed that they were acting from a sense of duty – a great burden stoically borne. And such lessons can be generalised across history. If you believed, like the Inquisition, that torturing heretics was the only way to save their souls from eternal damnation in Hell, would you have the moral courage to do likewise? If you believed that the world would be destroyed by the gods unless you practised mass human sacrifice, would you participate? [No, in my case, albeit for unorthodox reasons.]

In a secular context today, there exist upstanding citizens who would like future civilisation to run “ancestor simulations”. Ancestor simulations would create inconceivably more suffering than any crime perpetrated by the worst sadist or deluded ideologue in history – at least if the computational-functional theory of consciousness assumed by their proponents is correct. If I were to pitch a message to life-lovers aimed at justifying such a monstrous project, as you request, then I guess I’d spin some yarn about how marvellous it would be to recreate past wonders and see grandpa again. And so forth.

What about the actions of individuals, as distinct from whole societies? Not all depraved human behaviour stems from false metaphysics or confused ideology. The grosser forms of human unpleasantness often stem just from our unreflectively acting out baser appetites (cf. Hamiltonian spite). Consider the neuroscience of perception. Sentient beings don’t collectively perceive a shared public world. Each of us runs an egocentric world-simulation populated by zombies (sic). We each inhabit warped virtual worlds centered on a different body-image, situated within a vast reality whose existence can be theoretically inferred. Or so science says. Most people are still perceptual naïve realists. They aren’t metaphysicians, or moral philosophers, or students of the neuroscience of perception. Understandably, most people trust the evidence of their own eyes and the wisdom of their innermost feelings, over abstract theory. What “feels right” is shaped by natural selection. And what “feels right” within one’s egocentric virtual world is often callous and sometimes atrocious. Natural selection is amoral. We are all slaves to the pleasure-pain axis, however heavy the layers of disguise. Thanks to evolution, our emotions are “encephalised” in grotesque ways. Even the most ghastly behaviour can be made to seem natural – like Darwinian life itself.

Are there some forms of human behaviour so appalling that I’d find it hard to play devil’s advocate in their mitigation – even as an intellectual exercise?

Well, perhaps consider, say, the most reviled hate-figures in our society – even more reviled than murderers or terrorists. Most sexually active paedophiles don’t set out to harm children: quite the opposite, harm is typically just the tragic by-product of a sexual orientation they didn’t choose. Posthumans may reckon that all Darwinian relationships are toxic. Of course, not all monstrous human behavior stems from wellsprings as deep as sexual orientation. Thus humans aren’t obligate carnivores. Most (though not all) contemporary meat eaters, if pressed, will acknowledge in the abstract that a pig is as sentient and sapient as a prelinguistic human toddler. And no contemporary meat eaters seriously believe that their victims have committed a crime (cf. Animal trial – Wikipedia). Yet if questioned why they cause such terrible suffering to the innocent, and why they pay for a hamburger rather than a veggieburger, a meat eater will come up with perhaps the lamest justification for human depravity ever invented:

“But I like the taste!”

Such is the banality of evil.

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

– Alan Watts, talking about Camus’s claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (a.k.a. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it, one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together, as a unit, in order to hang together. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. In the meantime, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain *in its own terms* why other worldviews are popular. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people: that evil beings exist objectively, and that it is righteous to punish them and regard them with scorn. On this level, people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck there. Several religions (especially the Abrahamic ones) are often practiced in ways that reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level in order to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, in which there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumption of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above), we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived as either sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

Balance Between Good and Evil by tvtropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication, because they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws), or because they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and project it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, bind them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory-farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” The Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and lack of thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for that reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, in the sense that they recognize that some states of consciousness are inherently better than others in terms of their intrinsic subjective value.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics, for fear of sounding unwise or insensitive. And when push comes to shove, people at this level lack good arguments to decisively explain why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures lead in the opposite direction, where negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and relative to the general population, they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is to formalize the situation and work out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts that suggest it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used in order to steer evolution in a positive direction. In particular, as of 2017:

  1. A very big proportion of the economy is dedicated to trading positive experiences for money, rather than just survival or tools of power. Thus an economy of information about states of consciousness is still feasible.
  2. A large fraction of the population is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and ultimately, economically useful (see hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested (or at least curious) about the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence but also, remarkably, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies in which the highest value is collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider for example the contrast between Burning Man and Las Vegas. Burning Man is a place that works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place that is filled to the top with pure replicators in the forms of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that will make it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. Per the definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. And in the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they may require. Such utility will be a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Figure: system_specifications]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence. We have the inputs, the outputs, the constraints (such as temperature, materials, etc.) and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

[Figure: basic_system]

Now, we introduce the notion of the “valence for the system given a particular input” (i.e. the valence for the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

[Figure: valence_altruism]
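To make the notation concrete, here is a minimal sketch in Python of how e(o|i, s, c), v(s|i), and U might fit together. Everything here is illustrative: the dictionary-based “system” representation, the toy efficiency formula, and the valence_weight parameter are our own stand-ins rather than part of the formalism. A pure replicator corresponds to valence_weight = 0; an altruistic intelligence to valence_weight > 0.

```python
# Toy sketch of the formalism above (illustrative only).
# A "system" is anything an intelligence uses to map inputs to outputs
# (a brain, a chip, an organization), represented here as a dictionary.

def efficiency(inp, system, constraints):
    """e(o|i, s, c): how accurately and cheaply the system produces the
    desired output for input i under constraints c. Here output quality
    is summarized by a per-input accuracy table (a made-up stand-in)."""
    accuracy = system["accuracy"][inp]                      # 0.0 to 1.0
    energy_cost = system["energy_per_use"] * constraints["energy_price"]
    return accuracy / (1.0 + energy_cost)

def valence(system, inp):
    """v(s|i): the hedonic tone of the system's state on input i
    (negative = suffering, positive = bliss). Assumed computable,
    per the valence structuralism premise."""
    return system["valence"][inp]

def utility(e, v, valence_weight=0.0):
    """U(e(o|i, s, c), v(s|i)): a pure replicator sets valence_weight = 0
    and cares only about efficiency; an altruist sets valence_weight > 0,
    penalizing negative valence and rewarding positive valence."""
    return e + valence_weight * v
```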

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the sum, over all inputs, of the utility of each input multiplied by its probability.

[Figure: input_probabilities]

(Note: a more complete formalization would also weigh the importance of each input-output transformation, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system given the distribution of inputs it’s used for, its valence, its efficiency metrics, and its constraints as E[U(s|v, e, c, P(I))]:

[Figure: chosen_system]

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
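Continuing the same toy sketch (and reusing the hypothetical efficiency, valence, and utility helpers defined above), the expected utility E[U(s|v, e, c, P(I))] is just the probability-weighted sum of per-input utilities, and the intelligence picks the candidate system that maximizes it:

```python
def expected_utility(system, input_probs, constraints, valence_weight=0.0):
    """E[U(s|v, e, c, P(I))]: the utility of each input weighted by its
    probability. input_probs maps each input i to P(i)."""
    total = 0.0
    for inp, p in input_probs.items():
        e = efficiency(inp, system, constraints)
        v = valence(system, inp)
        total += p * utility(e, v, valence_weight)
    return total

def choose_system(candidates, input_probs, constraints, valence_weight=0.0):
    """The intelligence selects the system maximizing expected utility.
    With valence_weight = 0 (a pure replicator), a suffering system wins
    whenever it is even marginally more efficient."""
    return max(
        candidates,
        key=lambda s: expected_utility(s, input_probs, constraints, valence_weight),
    )
```

For instance, two candidate systems with equal efficiency but opposite valence receive identical expected utility whenever valence_weight is zero, which is the formal core of the pure replicator worry described next.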

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on altruism. If a lack of altruism does not reduce reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). As evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of such systems. In other words, evolution pushes intelligences to become pure replicators in the long run.

Hence we should ask: how can altruism increase the chances of reproduction? A possibility would be for the environment to reward entities that are altruistic. Unfortunately, in the long run we might see that environments that reward altruistic entities produce less efficient entities than environments that don’t. If there are two very similar environments, one which rewards altruism and one which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism is implementing the reward for altruism in the former. Thus, we suggest finding environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to exist for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing such valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in sketching the broad kinds of theories of consciousness on offer. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How we could formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned about pain or pleasure, and there is cause to believe that behaviorism itself might be anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are in fact hedonically charged, and for what reason, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind gets more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses being made about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state might be. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than this working as a reductio ad absurdum of functionalism, many of its proponents have instead turned around and concluded that consciousness itself is not a natural kind. This does represent an important challenge for defining the valence of the system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.
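To show where the functionalist unknowns sit within the earlier formalization, here is a purely schematic sketch: both A(s) and the mapping from algorithms to hedonic tone are open problems, so the function bodies below are placeholders rather than proposals, and the function names are our own.

```python
def algorithms_of(system):
    """A(s): return a description of the algorithms the system implements.
    As discussed above, it is unclear whether this question even has a
    frame-invariant answer."""
    raise NotImplementedError("No agreed-upon way to extract A(s).")

def algorithmic_valence(algorithms, inp):
    """Hypothetical mapping from running algorithms to hedonic tone
    (e.g. some functionalists point to reinforcement-learning signals)."""
    raise NotImplementedError("No agreed-upon algorithm-to-valence mapping.")

def valence_functionalist(system, inp):
    """v(s|i) under functionalism: valence is a property of the algorithms
    being run, not of the physical substrate that runs them."""
    return algorithmic_valence(algorithms_of(system), inp)
```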

We also should note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account of consciousness refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purpose of this article should be good enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, both in relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need to obtain this frame-invariant description of what the system is in a given state. Thus, we need a function that takes as input physical measurements of the system and returns the best possible approximation to what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. Now, in practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well-approximated by high-level features of it.
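For contrast, the non-materialist physicalist version of the same pipeline routes v(s|i) through Ph(s), a frame-invariant physical description of the system, rather than through A(s). Again, this is only a schematic placement of the unknowns, with hypothetical function names of our own choosing:

```python
def physical_state_of(system):
    """Ph(s): best available frame-invariant description of what the system
    physically is in its current state (in practice a coarse-grained
    approximation rather than a Planck-scale one)."""
    raise NotImplementedError("Requires physical measurement of the system.")

def physical_valence(state, inp):
    """Hypothetical mapping from a frame-invariant physical state to hedonic
    tone (cf. the Principia Qualia proposal that valence is a structural
    property of the mathematical object describing an experience)."""
    raise NotImplementedError("No agreed-upon state-to-valence mapping.")

def valence_physicalist(system, inp):
    """v(s|i) under non-materialist physicalism: valence is an intrinsic
    property of the physical (phenomenal) state itself."""
    return physical_valence(physical_state_of(system), inp)
```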

The main problem with Non-Materialist Physicalism comes when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all of the relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem to be a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of such stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness in order to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal, aspects of our existence happen to be computationally advantageous. Just as using quantum computing for factoring large numbers, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, so is using bound conscious experiences helpful for outcompeting non-sentient animals.

Of course, there is as yet no evidence of macroscopic quantum coherence, and the brain is too hot anyway, so on the face of it Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that makes both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters required by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). At one extreme, it may be that a fully optimized brain simply does not need consciousness to solve problems. At the other extreme, it may turn out that consciousness is extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out the way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Hence the case for pushing our evolutionary history in the right direction right now, before we give up.

 Coming Next: The Hierarchy of Cooperators

Now that we have covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality on its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, availability of states of consciousness, and a set of beliefs and practices to help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms that are producing the output are probably highly complex, involving information processing with human-like linguistic representations, which means that, by all means, the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think that these ones are about the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big-picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

24 Predictions for the Year 3000 by David Pearce

In response to the Quora question Looking 1000 years into the future and assuming the human race is doing well, what will society be like?, David Pearce wrote:


The history of futurology to date makes sobering reading. Prophecies tend to reveal more about the emotional and intellectual limitations of the author than the future. […]
But here goes…

Year 3000

1) Superhuman bliss.

Mastery of our reward circuitry promises a future of superhuman bliss – gradients of genetically engineered well-being orders of magnitude richer than today’s “peak experiences”.
Superhappiness?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3274778/

2) Eternal youth.

More strictly, indefinitely extended youth and effectively unlimited lifespans. Transhumans, humans and their nonhuman animal companions don’t grow old and perish. Automated off-world backups allow restoration and “respawning” in case of catastrophic accidents. “Aging” exists only in the medical archives.
SENS Research Foundation – Wikipedia

3) Full-spectrum superintelligences.

A flourishing ecology of sentient nonbiological quantum computers, hyperintelligent digital zombies and full-spectrum transhuman “cyborgs” has radiated across the Solar System. Neurochipping makes superintelligence all-pervasive. The universe seems inherently friendly: ubiquitous AI underpins the illusion that reality conspires to help us.
Superintelligence: Paths, Dangers, Strategies – Wikipedia
Artificial Intelligence @ MIRI
Kurzweil Accelerating Intelligence
Supersentience

4) Immersive VR.

“Magic” rules. “Augmented reality” of earlier centuries has been largely superseded by hyperreal virtual worlds with laws, dimensions, avatars and narrative structures wildly different from ancestral consensus reality. Selection pressure in the basement makes complete escape into virtual paradises infeasible. For the most part, infrastructure maintenance in basement reality has been delegated to zombie AI.
Augmented reality – Wikipedia
Virtual reality – Wikipedia

5) Transhuman psychedelia / novel state spaces of consciousness.

Analogues of cognition, volition and emotion as conceived by humans have been selectively retained, though with a richer phenomenology than our thin logico-linguistic thought. Other fundamental categories of mind have been discovered via genetic tinkering and pharmacological experiment. Such novel faculties are intelligently harnessed in the transhuman CNS. However, the ordinary waking consciousness of Darwinian life has been replaced by state-spaces of mind physiologically inconceivable to Homo sapiens. Gene-editing tools have opened up modes of consciousness that make the weirdest human DMT trip akin to watching paint dry. These disparate states-spaces of consciousness do share one property: they are generically blissful. “Bad trips” as undergone by human psychonauts are physically impossible because in the year 3000 the molecular signature of experience below “hedonic zero” is missing.
ShulginResearch.org
Qualia Computing

6) Supersentience / ultra-high intensity experience.

The intensity of everyday experience surpasses today’s human imagination. Size doesn’t matter to digital data-processing, but bigger brains with reprogrammed, net-enabled neurons and richer synaptic connectivity can exceed the maximum sentience of small, simple, solipsistic mind-brains shackled by the constraints of the human birth-canal. The theoretical upper limits to phenomenally bound mega-minds, and the ultimate intensity of experience, remain unclear. Intuitively, humans have a dimmer-switch model of consciousness – with e.g. ants and worms subsisting with minimal consciousness and humans at the pinnacle of the Great Chain of Being. Yet Darwinian humans may resemble sleepwalkers compared to our fourth-millennium successors. Today we say we’re “awake”, but mankind doesn’t understand what “posthuman intensity of experience” really means.
What earthly animal comes closest to human levels of sentience?

7) Reversible mind-melding.

Early in the twenty-first century, perhaps the only people who know what it’s like even partially to share a mind are the conjoined Hogan sisters. Tatiana and Krista Hogan share a thalamic bridge. Even mirror-touch synaesthetes can’t literally experience the pains and pleasures of other sentient beings. But in the year 3000, cross-species mind-melding technologies – for instance, sophisticated analogues of reversible thalamic bridges – and digital analogs of telepathy have led to a revolution in both ethics and decision-theoretic rationality.
Could Conjoined Twins Share a Mind?
Mirror-touch synesthesia – Wikipedia
Ecstasy : Utopian Pharmacology

8) The Anti-Speciesist Revolution / worldwide veganism/invitrotarianism.

Factory-farms, slaughterhouses and other Darwinian crimes against sentience have passed into the dustbin of history. Omnipresent AI cares for the vulnerable via “high-tech Jainism”. The Anti-Speciesist Revolution has made arbitrary prejudice against other sentient beings on grounds of species membership as perversely unthinkable as discrimination on grounds of ethnic group. Sentience is valued more than sapience, the prerogative of classical digital zombies (“robots”).
What is High-tech Jainism?
The Antispeciesist Revolution
‘Speciesism: Why It Is Wrong and the Implications of Rejecting It’

9) Programmable biospheres.

Sentient beings help rather than harm each other. The successors of today’s primitive CRISPR genome-editing and synthetic gene drive technologies have reworked the global ecosystem. Darwinian life was nasty, brutish and short. Extreme violence and useless suffering were endemic. In the year 3000, fertility regulation via cross-species immunocontraception has replaced predation, starvation and disease to regulate ecologically sustainable population sizes in utopian “wildlife parks”. The free-living descendants of “charismatic mega-fauna” graze happily with neo-dinosaurs, self-replicating nanobots, and newly minted exotica in surreal garden of edens. Every cubic metre of the biosphere is accessible to benign supervision – “nanny AI” for humble minds who haven’t been neurochipped for superintelligence. Other idyllic biospheres in the Solar System have been programmed from scratch.
CRISPR – Wikipedia
Genetically designing a happy biosphere
Our Biotech Future

10) The formalism of the TOE is known.
(details omitted – does Quora support LaTeX?)

Dirac recognised the superposition principle as the fundamental principle of quantum mechanics. Wavefunction monists believe the superposition principle holds the key to reality itself. However – barring the epoch-making discovery of a cosmic Rosetta stone – the implications of some of the more interesting solutions of the master equation for subjective experience are still unknown.
Theory of everything – Wikipedia
M-theory – Wikipedia
Why does the universe exist? Why is there something rather than nothing?
Amazon.com: The Wave Function: Essays on the Metaphysics of Quantum Mechanics (9780199790548): Alyssa Ney, David Z Albert: Books

11) The Hard Problem of consciousness is solved.

The Hard Problem of consciousness was long reckoned insoluble. The Standard Model in physics from which (almost) all else springs was a bit of a mess but stunningly empirically successful at sub-Planckian energy regimes. How could physicalism and the ontological unity of science be reconciled with the existence, classically impossible binding, causal-functional efficacy and diverse palette of phenomenal experience? Mankind’s best theory of the world was inconsistent with one’s own existence, a significant shortcoming. However, all classical- and quantum-mind conjectures with predictive power had been empirically falsified by 3000 – with one exception.
Physicalism – Wikipedia
Quantum Darwinism – Wikipedia
Consciousness (Stanford Encyclopedia of Philosophy)
Hard problem of consciousness – Wikipedia
Integrated information theory – Wikipedia
Principia Qualia
Dualism – Wikipedia
New mysterianism – Wikipedia
Quantum mind – Wikipedia

[Which theory is most promising? As with the TOE, you’ll forgive me for skipping the details. In any case, my ideas are probably too idiosyncratic to be of wider interest, but for anyone curious: What is the Quantum Mind?]

12) The Meaning of Life resolved.

Everyday life is charged with a profound sense of meaning and significance. Everyone feels valuable and valued. Contrast the way twenty-first century depressives typically found life empty, absurd or meaningless; and how even “healthy” normals were sometimes racked by existential angst. Or conversely, compare how people with bipolar disorder experienced megalomania and messianic delusions when uncontrollably manic. Hyperthymic civilization in the year 3000 records no such pathologies of mind or deficits in meaning. Genetically preprogrammed gradients of invincible bliss ensure that all sentient beings find life self-intimatingly valuable. Transhumans love themselves, love life, and love each other.
https://www.transhumanism.com/

13) Beautiful new emotions.

Nasty human emotions have been retired – with or without the recruitment of functional analogs to play their former computational role. Novel emotions have been biologically synthesised and their “raw feels” encephalised and integrated into the CNS. All emotion is beautiful. The pleasure axis has replaced the pleasure-pain axis as the engine of civilised life.
An information-theoretic perspective on life in Heaven

14) Effectively unlimited material abundance / molecular nanotechnology.

Status goods long persisted in basement reality, as did relics of the cash nexus on the blockchain. Yet in a world where both computational resources and the substrates of pure bliss aren’t rationed, such ugly evolutionary hangovers first withered, then died.
http://metamodern.com/about-the-author/
Blockchain – Wikipedia

15) Posthuman aesthetics / superhuman beauty.

The molecular signatures of aesthetic experience have been identified, purified and overexpressed. Life is saturated with superhuman beauty. What passed for “Great Art” in the Darwinian era is no more impressive than year 2000 humans might judge, say, a child’s painting by numbers or Paleolithic daubings and early caveporn. Nonetheless, critical discernment is retained. Transhumans are blissful but not “blissed out” – or not all of them at any rate.
Art – Wikipedia
http://www.sciencemag.org/news/2009/05/earliest-pornography

16) Gender transformation.

Like gills or a tail, “gender” in the human sense is a thing of the past. We might call some transhuman minds hyper-masculine (the “ultrahigh AQ” hyper-systematisers), others hyperfeminine (“ultralow AQ” hyper-empathisers), but transhuman cognitive styles transcend such crude dichotomies, and can be shifted almost at will via embedded AI. Many transhumans are asexual, others pan-sexual, a few hypersexual, others just sexually inquisitive. “The degree and kind of a man’s sexuality reach up into the ultimate pinnacle of his spirit”, said Nietzsche – which leads to (17).

Object Sexuality – Wikipedia
Empathizing & Systematizing Theory – Wikipedia
https://www.livescience.com/2094-homosexuality-turned-fruit-flies.html
https://www.wired.com/2001/12/aqtest/

17) Physical superhealth.

In 3000, everyone feels physically and psychologically “better than well”. Darwinian pathologies of the flesh such as fatigue, the “leaden paralysis” of chronic depressives, and bodily malaise of any kind are inconceivable. The (comparatively) benign “low pain” alleles of the SCN9A gene that replaced their nastier ancestral cousins have been superseded by AI-based nociception with optional manual overrides. Multi-sensory bodily “superpowers” are the norm. Everyone loves their body-images in virtual and basement reality alike. Morphological freedom is effectively unbounded. Awesome robolovers, nights of superhuman sensual passion, 48-hour whole-body orgasms, and sexual practices that might raise eyebrows among prudish Darwinians have multiplied. Yet life isn’t a perpetual orgy. Academic subcultures pursue analogues of Mill’s “higher pleasures”. Paradise engineering has become a rigorous discipline. That said, a lot of transhumans are hedonists who essentially want to have superhuman fun. And why not?
https://www.wired.com/2017/04/the-cure-for-pain/
http://io9.gizmodo.com/5946914/should-we-eliminate-the-human-ability-to-feel-pain
http://www.bbc.com/future/story/20140321-orgasms-at-the-push-of-a-button

18) World government.

Routine policy decisions in basement reality have been offloaded to ultra-intelligent zombie AI. The quasi-psychopathic relationships of Darwinian life – not least the zero-sum primate status-games of the African savannah – are ancient history. Some conflict-resolution procedures previously off-loaded to AI have been superseded by diplomatic “mind-melds”. In the words of Henry Wadsworth Longfellow, “If we could read the secret history of our enemies, we should find in each man’s life sorrow and suffering enough to disarm all hostility.” Our descendants have windows into each other’s souls, so to speak.

19) Historical amnesia.

The world’s last experience below “hedonic zero” marked a major evolutionary transition in the evolutionary development of life. In 3000, the nature of sub-zero states below Sidgwick’s “natural watershed” isn’t understood except by analogy: some kind of phase transition in consciousness below life’s lowest hedonic floor – a hedonic floor that is being genetically ratcheted upwards as life becomes ever more wonderful. Transhumans are hyper-empathetic. They get off on each other’s joys. Yet paradoxically, transhuman mental superhealth depends on biological immunity to true comprehension of the nasty stuff elsewhere in the universal wavefunction that even mature superintelligence is impotent to change. Maybe the nature of e.g. Darwinian life, and the minds of malaise-ridden primitives in inaccessible Everett branches, doesn’t seem any more interesting than we find books on the Dark Ages. Negative utilitarianism, if it were conceivable, might be viewed as a depressive psychosis. “Life is suffering”, said Gautama Buddha, but fourth millennials feel in the roots of their being that Life is bliss.
Invincible ignorance? Perhaps.
Negative Utilitarianism – Wikipedia

20) Super-spirituality.

A tough one to predict. But neuroscience will soon be able to identify the molecular signatures of spiritual experience, refine them, and massively amplify their substrates. Perhaps some fourth millennials enjoy lifelong spiritual ecstasies beyond the mystical epiphanies of temporal-lobe epileptics. Secular rationalists don’t know what we’re missing.
https://www.newscientist.com/article/mg22129531-000-ecstatic-epilepsy-how-seizures-can-be-bliss/

21) The Reproductive Revolution.

Reproduction is uncommon in a post-aging society. Most transhumans originate as extra-uterine “designer babies”. The reckless genetic experimentation of sexual reproduction had long seemed irresponsible, but old habits died hard. By year 3000, the genetic crapshoot of Darwinian life has finally been replaced by precision-engineered sentience. Early critics of “eugenics” and a “Brave New World” have discovered by experience that a “triple S” civilisation of superhappiness, superlongevity and superintelligence isn’t as bad as they supposed.
https://www.reproductive-revolution.com/
https://www.huxley.net/

22) Globish (“English Plus”).

Automated real-time translation has been superseded by a common tongue – Globish – spoken, written or “telepathically” communicated. Partial translation manuals for mutually alien state-spaces of consciousness exist, but – as twentieth-century Kuhnians would have put it – such state-spaces tend to be incommensurable and their concepts state-specific. Compare how poorly lucid dreamers can communicate with “awake” humans. Many Darwinian terms and concepts are effectively obsolete. In their place, active transhumanist vocabularies of millions of words are common. “Basic Globish” is used for communication with humble minds, i.e. human and nonhuman animals who haven’t been fully uplifted.
Incommensurability – SEoP
Uplift (science fiction) – Wikipedia

23) Plans for Galactic colonization.

Terraforming and 3D-bioprinting of post-Darwinian life in nearby solar systems are proceeding apace. Vacant ecological niches tend to get filled. In earlier centuries, a synthesis of cryonics, crude reward-pathway enhancements and immersive VR software, combined with revolutionary breakthroughs in rocket propulsion, led to the launch of primitive manned starships. Several are still starbound. Some transhuman utilitarian ethicists and policy-makers favour creating a utilitronium shockwave beyond the pale of civilisation to convert matter and energy into pure pleasure. Year 3000 bioconservatives focus on promoting life animated by gradients of superintelligent bliss. Yet no one objects to pure “hedonium” replacing unprogrammed matter.
Interstellar Travel – Wikipedia
Utilitarianism – Wikipedia

24) The momentous “unknown unknown”.

If you read a text and the author’s last words are “and then I woke up”, everything you’ve read must be interpreted in a new light – semantic holism with a vengeance. By the year 3000, some earth-shattering revelation may have changed everything: the overturning of some fundamental background assumption of earlier centuries, perhaps one never explicitly represented in our conceptual scheme. If it exists, I’ve no inkling what this “unknown unknown” might be, unless it lies hidden in the untapped subjective properties of matter and energy. Christian readers might interject “The Second Coming”. Learning that the Simulation Hypothesis is true would be a secular example of such a revelation. Some believers in an AI “Intelligence Explosion” speak delphically of “The Singularity”. Whatever – Shakespeare made the point more poetically: “There are more things in heaven and earth, Horatio, / Than are dreamt of in your philosophy”.

As it stands, yes, (24) is almost vacuous. Yet consider that the philosophers of classical antiquity who came closest to recognising their predicament weren’t intellectual titans like Plato or Aristotle, but the radical sceptics. The sceptics guessed they were ignorant in ways that transcended the capacity of their conceptual scheme to articulate. By the lights of the fourth millennium, what I’m writing, and what you’re reading, may be stultified by something that humans don’t know and can’t express.
Ancient Skepticism – SEoP

**********************************************************************

OK, twenty-four predictions! Successful prophets tend to locate salvation or doom within the credible lifetime of their intended audience. The questioner asks about life in the year 3000 rather than, say, a Kurzweilian 2045. In my view, everyone reading this text will grow old and die before the predictions of this answer are realised or confounded – with one possible complication.

Opt-out cryonics and opt-in cryothanasia will be feasible long before the conquest of aging. Visiting grandpa in the cryonics facility can turn death into an event in life. I’m not convinced that posthuman superintelligence will reckon that Darwinian malware should be revived in any shape or form. Yet if you want to wake up one morning in posthuman paradise – and I do see the appeal – then options exist:
http://www.alcor.org/

********************************************************************
P.S. I’m curious about the credence (if any) the reader would assign to the scenarios listed here.