24 Predictions for the Year 3000 by David Pearce

In response to the Quora question “Looking 1000 years into the future and assuming the human race is doing well, what will society be like?”, David Pearce wrote:


The history of futurology to date makes sobering reading. Prophecies tend to reveal more about the emotional and intellectual limitations of the author than the future. […]
But here goes…

Year 3000

1) Superhuman bliss.

Mastery of our reward circuitry promises a future of superhuman bliss – gradients of genetically engineered well-being orders of magnitude richer than today’s “peak experiences”.
Superhappiness?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3274778/

2) Eternal youth.

More strictly, indefinitely extended youth and effectively unlimited lifespans. Transhumans, humans and their nonhuman animal companions don’t grow old and perish. Automated off-world backups allow restoration and “respawning” in case of catastrophic accidents. “Aging” exists only in the medical archives.
SENS Research Foundation – Wikipedia

3) Full-spectrum superintelligences.

A flourishing ecology of sentient nonbiological quantum computers, hyperintelligent digital zombies and full-spectrum transhuman “cyborgs” has radiated across the Solar System. Neurochipping makes superintelligence all-pervasive. The universe seems inherently friendly: ubiquitous AI underpins the illusion that reality conspires to help us.
Superintelligence: Paths, Dangers, Strategies – Wikipedia
Artificial Intelligence @ MIRI
Kurzweil Accelerating Intelligence
Supersentience

4) Immersive VR.

“Magic” rules. “Augmented reality” of earlier centuries has been largely superseded by hyperreal virtual worlds with laws, dimensions, avatars and narrative structures wildly different from ancestral consensus reality. Selection pressure in the basement makes complete escape into virtual paradises infeasible. For the most part, infrastructure maintenance in basement reality has been delegated to zombie AI.
Augmented reality – Wikipedia
Virtual reality – Wikipedia

5) Transhuman psychedelia / novel state spaces of consciousness.

Analogues of cognition, volition and emotion as conceived by humans have been selectively retained, though with a richer phenomenology than our thin logico-linguistic thought. Other fundamental categories of mind have been discovered via genetic tinkering and pharmacological experiment. Such novel faculties are intelligently harnessed in the transhuman CNS. However, the ordinary waking consciousness of Darwinian life has been replaced by state-spaces of mind physiologically inconceivable to Homo sapiens. Gene-editing tools have opened up modes of consciousness that make the weirdest human DMT trip akin to watching paint dry. These disparate state-spaces of consciousness do share one property: they are generically blissful. “Bad trips” as undergone by human psychonauts are physically impossible because in the year 3000 the molecular signature of experience below “hedonic zero” is missing.
ShulginResearch.org
Qualia Computing

6) Supersentience / ultra-high intensity experience.

The intensity of everyday experience surpasses today’s human imagination. Size doesn’t matter to digital data-processing, but bigger brains with reprogrammed, net-enabled neurons and richer synaptic connectivity can exceed the maximum sentience of small, simple, solipsistic mind-brains shackled by the constraints of the human birth-canal. The theoretical upper limits to phenomenally bound mega-minds, and the ultimate intensity of experience, remain unclear. Intuitively, humans have a dimmer-switch model of consciousness – with e.g. ants and worms subsisting with minimal consciousness and humans at the pinnacle of the Great Chain of Being. Yet Darwinian humans may resemble sleepwalkers compared to our fourth-millennium successors. Today we say we’re “awake”, but mankind doesn’t understand what “posthuman intensity of experience” really means.
What earthly animal comes closest to human levels of sentience?

7) Reversible mind-melding.

Early in the twenty-first century, perhaps the only people who know what it’s like even partially to share a mind are the conjoined Hogan sisters. Tatiana and Krista Hogan share a thalamic bridge. Even mirror-touch synaesthetes can’t literally experience the pains and pleasures of other sentient beings. But in the year 3000, cross-species mind-melding technologies – for instance, sophisticated analogues of reversible thalamic bridges – and digital analogues of telepathy have led to a revolution in both ethics and decision-theoretic rationality.
Could Conjoined Twins Share a Mind?
Mirror-touch synesthesia – Wikipedia
Ecstasy: Utopian Pharmacology

8) The Anti-Speciesist Revolution / worldwide veganism/invitrotarianism.

Factory-farms, slaughterhouses and other Darwinian crimes against sentience have passed into the dustbin of history. Omnipresent AI cares for the vulnerable via “high-tech Jainism”. The Anti-Speciesist Revolution has made arbitrary prejudice against other sentient beings on grounds of species membership as perversely unthinkable as discrimination on grounds of ethnic group. Sentience is valued more than sapience, the prerogative of classical digital zombies (“robots”).
What is High-tech Jainism?
The Antispeciesist Revolution
‘Speciesism: Why It Is Wrong and the Implications of Rejecting It’

9) Programmable biospheres.

Sentient beings help rather than harm each other. The successors of today’s primitive CRISPR genome-editing and synthetic gene drive technologies have reworked the global ecosystem. Darwinian life was nasty, brutish and short. Extreme violence and useless suffering were endemic. In the year 3000, fertility regulation via cross-species immunocontraception has replaced predation, starvation and disease as the means of maintaining ecologically sustainable population sizes in utopian “wildlife parks”. The free-living descendants of “charismatic mega-fauna” graze happily with neo-dinosaurs, self-replicating nanobots, and newly minted exotica in surreal Gardens of Eden. Every cubic metre of the biosphere is accessible to benign supervision – “nanny AI” for humble minds who haven’t been neurochipped for superintelligence. Other idyllic biospheres in the Solar System have been programmed from scratch.
CRISPR – Wikipedia
Genetically designing a happy biosphere
Our Biotech Future

10) The formalism of the TOE is known.
(details omitted – does Quora support LaTeX?)

Dirac recognised the superposition principle as the fundamental principle of quantum mechanics. Wavefunction monists believe the superposition principle holds the key to reality itself. However – barring the epoch-making discovery of a cosmic Rosetta stone – the implications of some of the more interesting solutions of the master equation for subjective experience are still unknown.
Theory of everything – Wikipedia
M-theory – Wikipedia
Why does the universe exist? Why is there something rather than nothing?
Amazon.com: The Wave Function: Essays on the Metaphysics of Quantum Mechanics (9780199790548): Alyssa Ney, David Z Albert: Books

11) The Hard Problem of consciousness is solved.

The Hard Problem of consciousness was long reckoned insoluble. The Standard Model in physics from which (almost) all else springs was a bit of a mess but stunningly empirically successful at sub-Planckian energy regimes. How could physicalism and the ontological unity of science be reconciled with the existence, classically impossible binding, causal-functional efficacy and diverse palette of phenomenal experience? Mankind’s best theory of the world was inconsistent with one’s own existence, a significant shortcoming. However, all classical- and quantum-mind conjectures with predictive power had been empirically falsified by 3000 – with one exception.
Physicalism – Wikipedia
Quantum Darwinism – Wikipedia
Consciousness (Stanford Encyclopedia of Philosophy)
Hard problem of consciousness – Wikipedia
Integrated information theory – Wikipedia
Principia Qualia
Dualism – Wikipedia
New mysterianism – Wikipedia
Quantum mind – Wikipedia

[Which theory is most promising? As with the TOE, you’ll forgive me for skipping the details. In any case, my ideas are probably too idiosyncratic to be of wider interest, but for anyone curious: What is the Quantum Mind?]

12) The Meaning of Life resolved.

Everyday life is charged with a profound sense of meaning and significance. Everyone feels valuable and valued. Contrast the way twenty-first century depressives typically found life empty, absurd or meaningless; and how even “healthy” normals were sometimes racked by existential angst. Or conversely, compare how people with bipolar disorder experienced megalomania and messianic delusions when uncontrollably manic. Hyperthymic civilization in the year 3000 records no such pathologies of mind or deficits in meaning. Genetically preprogrammed gradients of invincible bliss ensure that all sentient beings find life self-intimatingly valuable. Transhumans love themselves, love life, and love each other.
https://www.transhumanism.com/

13) Beautiful new emotions.

Nasty human emotions have been retired – with or without the recruitment of functional analogs to play their former computational role. Novel emotions have been biologically synthesised and their “raw feels” encephalised and integrated into the CNS. All emotion is beautiful. The pleasure axis has replaced the pleasure-pain axis as the engine of civilised life.
An information-theoretic perspective on life in Heaven

14) Effectively unlimited material abundance / molecular nanotechnology.

Status goods long persisted in basement reality, as did relics of the cash nexus on the blockchain. Yet in a world where both computational resources and the substrates of pure bliss aren’t rationed, such ugly evolutionary hangovers first withered, then died.
http://metamodern.com/about-the-author/
Blockchain – Wikipedia

15) Posthuman aesthetics / superhuman beauty.

The molecular signatures of aesthetic experience have been identified, purified and overexpressed. Life is saturated with superhuman beauty. What passed for “Great Art” in the Darwinian era impresses transhumans no more than, say, a child’s painting by numbers or Paleolithic daubings and early caveporn impress year 2000 humans. Nonetheless, critical discernment is retained. Transhumans are blissful but not “blissed out” – or not all of them at any rate.
Art – Wikipedia
http://www.sciencemag.org/news/2009/05/earliest-pornography

16) Gender transformation.

Like gills or a tail, “gender” in the human sense is a thing of the past. We might call some transhuman minds hyper-masculine (the “ultrahigh AQ” hyper-systematisers), others hyperfeminine (“ultralow AQ” hyper-empathisers), but transhuman cognitive styles transcend such crude dichotomies, and can be shifted almost at will via embedded AI. Many transhumans are asexual, others pan-sexual, a few hypersexual, others just sexually inquisitive. “The degree and kind of a man’s sexuality reach up into the ultimate pinnacle of his spirit”, said Nietzsche – which leads to (17).

Object Sexuality – Wikipedia
Empathizing & Systematizing Theory – Wikipedia
https://www.livescience.com/2094-homosexuality-turned-fruit-flies.html
https://www.wired.com/2001/12/aqtest/

17) Physical superhealth.

In 3000, everyone feels physically and psychologically “better than well”. Darwinian pathologies of the flesh such as fatigue, the “leaden paralysis” of chronic depressives, and bodily malaise of any kind are inconceivable. The (comparatively) benign “low pain” alleles of the SCN9A gene that replaced their nastier ancestral cousins have been superseded by AI-based nociception with optional manual overrides. Multi-sensory bodily “superpowers” are the norm. Everyone loves their body-images in virtual and basement reality alike. Morphological freedom is effectively unbounded. Awesome robolovers, nights of superhuman sensual passion, 48-hour whole-body orgasms, and sexual practices that might raise eyebrows among prudish Darwinians have multiplied. Yet life isn’t a perpetual orgy. Academic subcultures pursue analogues of Mill’s “higher pleasures”. Paradise engineering has become a rigorous discipline. That said, a lot of transhumans are hedonists who essentially want to have superhuman fun. And why not?
https://www.wired.com/2017/04/the-cure-for-pain/
http://io9.gizmodo.com/5946914/should-we-eliminate-the-human-ability-to-feel-pain
http://www.bbc.com/future/story/20140321-orgasms-at-the-push-of-a-button

18) World government.

Routine policy decisions in basement reality have been offloaded to ultra-intelligent zombie AI. The quasi-psychopathic relationships of Darwinian life – not least the zero-sum primate status-games of the African savannah – are ancient history. Some conflict-resolution procedures previously off-loaded to AI have been superseded by diplomatic “mind-melds”. In the words of Henry Wadsworth Longfellow, “If we could read the secret history of our enemies, we should find in each man’s life sorrow and suffering enough to disarm all hostility.” Our descendants have windows into each other’s souls, so to speak.

19) Historical amnesia.

The world’s last experience below “hedonic zero” marked a major transition in the evolutionary development of life. In 3000, the nature of sub-zero states below Sidgwick’s “natural watershed” isn’t understood except by analogy: some kind of phase transition in consciousness below life’s lowest hedonic floor – a hedonic floor that is being genetically ratcheted upwards as life becomes ever more wonderful. Transhumans are hyper-empathetic. They get off on each other’s joys. Yet paradoxically, transhuman mental superhealth depends on biological immunity to true comprehension of the nasty stuff elsewhere in the universal wavefunction that even mature superintelligence is impotent to change. Maybe the nature of e.g. Darwinian life, and the minds of malaise-ridden primitives in inaccessible Everett branches, seems no more interesting to them than books on the Dark Ages seem to us. Negative utilitarianism, if it were conceivable, might be viewed as a depressive psychosis. “Life is suffering”, said Gautama Buddha, but fourth millennials feel in the roots of their being that Life is bliss.
Invincible ignorance? Perhaps.
Negative Utilitarianism – Wikipedia

20) Super-spirituality.

A tough one to predict. But neuroscience will soon be able to identify the molecular signatures of spiritual experience, refine them, and massively amplify their molecular substrates. Perhaps some fourth millennials enjoy lifelong spiritual ecstasies beyond the mystical epiphanies of temporal-lobe epileptics. Secular rationalists don’t know what we’re missing.
https://www.newscientist.com/article/mg22129531-000-ecstatic-epilepsy-how-seizures-can-be-bliss/

21) The Reproductive Revolution.

Reproduction is uncommon in a post-aging society. Most transhumans originate as extra-uterine “designer babies”. The reckless genetic experimentation of sexual reproduction had long seemed irresponsible. Old habits still died hard. By the year 3000, the genetic crapshoot of Darwinian life has finally been replaced by precision-engineered sentience. Early critics of “eugenics” and a “Brave New World” have discovered by experience that a “triple S” civilisation of superhappiness, superlongevity and superintelligence isn’t as bad as they supposed.
https://www.reproductive-revolution.com/
https://www.huxley.net/

22) Globish (“English Plus”).

Automated real-time translation has been superseded by a common tongue – Globish – spoken, written or “telepathically” communicated. Partial translation manuals for mutually alien state-spaces of consciousness exist, but – as twentieth century Kuhnians would have put it – such state-spaces tend to be incommensurable and their concepts state-specific. Compare how poorly lucid dreamers can communicate with “awake” humans. Many Darwinian terms and concepts are effectively obsolete. In their place, active transhumanist vocabularies of millions of words are common. “Basic Globish” is used for communication with humble minds, i.e. human and nonhuman animals who haven’t been fully uplifted.
Incommensurability – SEoP
Uplift (science fiction) – Wikipedia

23) Plans for Galactic colonization.

Terraforming and 3D-bioprinting of post-Darwinian life on nearby solar systems are proceeding apace. Vacant ecological niches tend to get filled. In earlier centuries, a synthesis of cryonics, crude reward pathway enhancements and immersive VR software, combined with revolutionary breakthroughs in rocket propulsion, led to the launch of primitive manned starships. Several are still starbound. Some transhuman utilitarian ethicists and policy-makers favour creating a utilitronium shockwave beyond the pale of civilisation to convert matter and energy into pure pleasure. Year 3000 bioconservatives focus on promoting life animated by gradients of superintelligent bliss. Yet no one objects to pure “hedonium” replacing unprogrammed matter.
Interstellar Travel – Wikipedia
Utilitarianism – Wikipedia

24) The momentous “unknown unknown”.

If you read a text and the author’s last words are “and then I woke up”, everything you’ve read must be interpreted in a new light – semantic holism with a vengeance. By the year 3000, some earth-shattering revelation may have changed everything: some fundamental background assumption of earlier centuries may have been overturned, one that was perhaps never explicitly represented in our conceptual scheme. If it exists, then I’ve no inkling what this “unknown unknown” might be, unless it lies hidden in the untapped subjective properties of matter and energy. Christian readers might interject “The Second Coming”. Learning that the Simulation Hypothesis was true would be a secular example of such a revelation. Some believers in an AI “Intelligence Explosion” speak delphically of “The Singularity”. Whatever – Shakespeare made the point more poetically: “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy”.

As it stands, yes, (24) is almost vacuous. Yet compare how the philosophers of classical antiquity who came closest to recognising their predicament weren’t intellectual titans like Plato or Aristotle, but instead the radical sceptics. The sceptics guessed they were ignorant in ways that transcended the capacity of their conceptual scheme to articulate. By the lights of the fourth millennium, what I’m writing, and what you’re reading, may be stultified by something that humans don’t know and can’t express.
Ancient Skepticism – SEoP

**********************************************************************

OK, twenty-four predictions! Successful prophets tend to locate salvation or doom within the credible lifetime of their intended audience. The questioner asks about life in the year 3000 rather than, say, a Kurzweilian 2045. In my view, everyone reading this text will grow old and die before the predictions of this answer are realised or confounded – with one possible complication.

Opt-out cryonics and opt-in cryothanasia are feasible long before the conquest of aging. Visiting grandpa in the cryonics facility can turn death into an event in life. I’m not convinced that posthuman superintelligence will reckon that Darwinian malware should be revived in any shape or form. Yet if you want to wake up one morning in posthuman paradise – and I do see the appeal – then options exist:
http://www.alcor.org/

********************************************************************
p.s. I’m curious about the credence (if any) the reader would assign to the scenarios listed here.

Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit: they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is that there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside“. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.


In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about (suffering) allows a particularly clear critique of the problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and that it’s impossible to objectively tell whether a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering (whether monkeys can suffer, or whether dogs can), it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; whether any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of some disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence; in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness – enumerating its coherent constituent pieces – has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became – and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger as they’ve looked into techniques for how to “de-reify” consciousness while preserving some flavor of moral value, not smaller. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as trying to treat a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’, then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system under which you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
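
To make the “no principled mapping” worry concrete, here is a deliberately toy Python sketch (my own illustration, not Brian’s or FRI’s): given any sequence of distinct physical states, we can always construct an ad-hoc lookup that “interprets” those states as the trace of whatever computation we like.

```python
# Toy illustration (hypothetical): any sequence of distinct physical states can be
# paired, via an arbitrary lookup table, with the step-by-step trace of any
# computation we choose. Nothing in the physics privileges one pairing over another.
import random

random.seed(0)

# Eight "snapshots" of a shaken popcorn bag, modelled as arbitrary distinct tuples.
popcorn_states = [tuple(round(random.random(), 3) for _ in range(4)) for _ in range(8)]

# The trace of some target computation we would like the popcorn to "implement".
target_trace = [f"step {i} of the chosen computation" for i in range(8)]

# The ad-hoc interpretation: a one-to-one pairing of observed states with trace steps.
interpretation = dict(zip(popcorn_states, target_trace))

for state in popcorn_states:
    print(state, "->", interpretation[state])
# A different lookup would make the very same snapshots "implement" a different computation.
```

Whether the popcorn “really” performed that computation depends entirely on which lookup we pick, which is the point of the objection.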

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect— that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible— in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.
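
As a minimal sketch of the interpretation-dependence McCabe is pointing at (the voltages and thresholds below are invented purely for illustration), the same physical memory states, read under two equally admissible digital conventions, encode different numbers:

```python
# Hypothetical example: one physical voltage trace, two legitimate digital readings.
# Neither bit-string is "the" number the hardware objectively stores.
voltages = [0.4, 2.9, 3.1, 0.2, 2.6, 0.7, 3.3, 2.8]   # imagined memory-cell readings, in volts

def read_bits(samples, threshold):
    """Digital interpretation: anything above `threshold` counts as logical 1."""
    return [1 if v > threshold else 0 for v in samples]

def as_number(bits):
    return int("".join(map(str, bits)), 2)

loose  = read_bits(voltages, threshold=2.0)   # one convention
strict = read_bits(voltages, threshold=2.7)   # another convention

print(loose,  "->", as_number(loose))         # [0, 1, 1, 0, 1, 0, 1, 1] -> 107
print(strict, "->", as_number(strict))        # [0, 1, 1, 0, 0, 0, 1, 1] -> 99
```

The logical-to-physical mapping is many-to-one and convention-laden, which is all McCabe’s argument needs.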

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point – that “There’s no objective meaning to ‘the computation that this physical system is implementing’” – then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.

Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
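
As a sanity check on the arithmetic in the Vaidman-bomb passage above, here is a small Monte Carlo sketch (my own classical bookkeeping of the amplitudes, with invented parameter values): with no bomb the rotation accumulates and we end in the “query” state with probability near 1, while with a bomb the total probability of an explosion stays of order ε.

```python
import numpy as np

def vaidman_run(eps, bomb_present, rng):
    """One run of the quantum-Zeno bomb test, tracked classically.
    Returns (exploded, ended_in_query_state)."""
    theta = 0.0                                  # angle of the state away from |not-query>
    n_steps = int(np.ceil((np.pi / 2) / eps))    # ~1/eps repetitions, as in the quote
    for _ in range(n_steps):
        theta += eps                             # rotate eps of amplitude toward |query>
        if bomb_present:
            p_query = np.sin(theta) ** 2         # the bomb "measures" the query component
            if rng.random() < p_query:
                return True, False               # we queried the bomb: it explodes
            theta = 0.0                          # not queried: state collapses back
    return False, np.sin(theta) ** 2 > 0.5       # no explosion: did we end in |query>?

rng = np.random.default_rng(0)
for bomb in (False, True):
    runs = [vaidman_run(eps=0.01, bomb_present=bomb, rng=rng) for _ in range(5000)]
    print(f"bomb={bomb}: P(explode) ~ {np.mean([r[0] for r in runs]):.3f}, "
          f"P(end in query state) ~ {np.mean([r[1] for r in runs]):.3f}")
```

Either way the two cases are distinguished while the bomb is only rarely triggered; that counterintuitive property is what Aaronson then transfers to the “Vaidman brain”.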

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be uncomputable/reversible, would suggest s-risks aren’t as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, which is a red herring that it not only doesn’t define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism: nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism – which is to say radical skepticism/eliminativism, the metaphysics of last resort – only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’ or even pick between interpretations in a principled way (e.g. my popcorn example). If this isn’t the case— why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”– i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many– but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).


In conclusion, I think FRI has a critically important goal: the reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.


Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:
Flavors of Computation are Flavors of Consciousness:
https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/
Is There a Hard Problem of Consciousness?
http://reducing-suffering.org/hard-problem-consciousness/
Consciousness Is a Process, Not a Moment
http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/
How to Interpret a Physical System as a Mind
http://reducing-suffering.org/interpret-physical-system-mind/
Dissolving Confusion about Consciousness
http://reducing-suffering.org/dissolving-confusion-about-consciousness/
Debate between Brian & Mike on consciousness:
https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D
Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:
My meta-framework for consciousness, including the Symmetry Theory of Valence:
http://opentheory.net/PrincipiaQualia.pdf
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:
http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:
http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/
My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/

The Most Important Philosophical Question

Albert Camus famously claimed that the most important philosophical question in existence was whether to commit suicide. I would disagree.

For one, if Open Individualism is true (i.e. that deep down we are all one and the same consciousness), then ending one’s life will not accomplish much. The vast majority of “who you are” will remain intact, and if there are further problems to be solved, and questions to be answered, ending your life will simply delay your own progress. So, at least from a certain point of view, one could argue that the most important question is, instead, the question of personal identity. That is: are you, deep down, an individual being who starts existing when you are born and stops existing when you die (Closed Individualism), something that exists only for a single time-slice (Empty Individualism), or maybe something that is one and the same with the rest of the universe (Open Individualism)?

I think that is a very important question. But probably not the most important one. Instead, I’d posit that the most important question is: “What is good, and is there a ground truth about it?”

If we are all one consciousness, then maybe what’s truly good is whatever one actually, truly values from a first-person point of view (being mindful, of course, of the deceptive potential that comes from the Tyranny of the Intentional Object). And insofar as that question has been asked, I think that there are two remaining possibilities: does ultimate value come down to the pleasure-pain axis, or does it come down to spiritual wisdom?

Thus, in this day and age, I’d argue that the most important philosophical (and hence most important, period) question is: “Is happiness a spiritual trick, or is spirituality a happiness trick?”

What would it mean for happiness to be a spiritual trick? Think, for example, of the possibility that the reason why we exist is that we are all God, and God would be awfully bored if It knew that It was all that ever existed. In such a case, maybe bliss and happiness come down to something akin to “does this particular set of life experiences make God feel less lonely?”. Alternatively, maybe God is “divinely self-sufficient”, as some mystics claim, and all of creation is “merely a plus on top of God”. In this case one could think that God is the ultimate source of all that is good, and thus bliss may be synonymous with “being closer to God”. In turn, as mystics have claimed over the ages, the whole point of life is to “get closer to God”.

Spirituality, though, goes beyond God. Within (atheistic) Buddhism, the view that “bliss is a spiritual trick” might take another form: bliss is either “dirty and a sign of ignorance” (as in the case of karma-generating pleasure) or “the result of virtuous merit conducive to true unconditioned enlightenment”. Thus, the whole point of life would be to become free from ignorance and reap the benefits of knowing the ultimate truth.

And what would it mean for spirituality to be a happiness trick? In this case one could imagine that our valence (i.e. our pleasure-pain axis) is a sort of qualia variety that evolution recruited in order to infuse the phenomenal representation of situations that predict either higher or lower chances of making copies of oneself (or spreading one’s genes, in the more general case of “inclusive fitness”). If this is so, it might be tempting to think that bliss is, ultimately, not something that “truly matters”. But this would be to think that bliss is “nothing other than the function that bliss plays in animal behavior”, which couldn’t be further from the truth. After all, the same behavior could be enacted by many methods. Instead, the raw phenomenal character of bliss reveals that “something matters in this universe”. Only people who are anhedonic (or are depressed) will miss the fact that “bliss matters”. This is self-evident and self-intimating to anyone currently experiencing ecstatic rapture. In light of these experiences we can conclude that if anything at all does matter, it has to do with the qualia varieties involved in the experiences that feel like the world has meaning. The pleasure-pain axis makes our existence significant.

Now, why do I think this is the most important question? IF we discover that happiness is a spiritual trick and that God is its source, then we really ought to follow “the spiritual path” and figure out with science “what it is that God truly wants”. And under an atheistic brand of spirituality, what we ought to figure out is the laws of valence-charged spiritual energy. For example, if reincarnation and karma are involved in the expected amount of future bliss and suffering, so be it. Let’s all become Bodhisattvas and help as many sentient beings as possible throughout the eons to come.

On the other hand, IF we discover (and can prove with a good empirical argument) that spirituality is just the result of changes in valence/happiness, then settling on this with high certainty would change the world. For starters, any compassionate (and at least mildly rational) Buddhist would then come along and help us out in the pursuit of creating a pan-species welfare state free of suffering with the use of biotechnology. That is, the 500-odd million Buddhists worldwide would be key allies for the Hedonistic Imperative (a movement that aims to eliminate suffering with biotechnology).

Recall the Dalai Lama’s quote: “If it was possible to become free of negative emotions by a riskless implementation of an electrode – without impairing intelligence and the critical mind – I would be the first patient.” [Dalai Lama (Society for Neuroscience Congress, Nov. 2005)].

If Buddhist doctrine concerning the very nature of suffering and its causes is wrong from a scientific point of view and we can prove it with an empirically verified physicalist paradigm, then the very Buddhist ethic of “focusing on minimizing suffering” ought to compel Buddhists throughout the world to join us in the battle against suffering by any means necessary. And most likely, given the physicalist premise, this would take the form of creating a technology that puts us all in a perpetual pro-social clear-headed non-addictive MDMA-like state of consciousness (or, in a more sophisticated vein, a well-balanced version of rational wire-heading).

Psychedelic Science 2017: Take-aways, impressions, and what’s next

 

It would be impossible for me to summarize what truly went on at Psychedelic Science 2017. Since giving a fair and detailed account of all of the presentations, workshops and social events I attended is out of the question, I will restrict myself to talking about what I see as the core insights and take-aways from the conference (plus some additional impressions I’ll get to). In brief, the core insights are: (1) that we are on the brink of a culturally accepted scientific revolution in the study of consciousness, in which we finally navigate our way out of our current pre-Galilean understanding of the mind, (2) that the breakdown of both the extremes of nihilism and eternalism as ideological north-stars in consciousness research is about to take place (i.e. finding out that neither scientific materialism nor spirituality conveys the full picture), and (3) that a new science of valence, qualia, and rational psychonautics based on the quantification of good and bad feelings is slowly making its way to the surface.

With regard to (1): it should not come as a surprise to anyone who has been paying attention that there is a psychedelic renaissance underway. Barring extreme world-wide counter-measures against it, insofar as psychedelic and empathogenic compounds meet the required evidentiary standards of mainstream psychopharmacology as safe and effective treatments for mental illness (and they do), they will be a staple of tomorrow’s tools for mental health. It’s not a difficult gamble: the current studies being conducted around the world are merely providing the scientific backing for what was already known in the 60s (for psychedelics) and 80s (for MDMA), i.e., that psychedelic medicine (as people love to call it), in the right set and setting, produces outstanding, clinically relevant effect sizes.

On (2): it is unclear what the people who attended the conference believe about the nature of reality, but overall there was a strong Open Individualist undercurrent and a powerful feeling that transcendence is right next door (even the urinals had sacred geometry*). That said, the science provided a refreshing feeling of cautious nihilism. Trying to reconcile both love and science is, in my opinion, the way to go. Whether we are about to ascend to another realm or about to find out about our cosmic meaninglessness, the truth is that there are a lot of more immediate things to worry about. Arguably, psychedelic experiences could be used to treat both the afflictions that come with eternalism and those that come from nihilism. Namely, psychedelics often make you experience the world as you believe it to be (echoing John C. Lilly’s famous words: “In the province of the mind, what one believes to be true is true or becomes true, within certain limits to be found experientially and experimentally. These limits are further beliefs to be transcended. In the mind, there are no limits…”). So if you rely on intense (but mundanely challenged) feelings of transcendence to get by, you may find out during a psychedelic experience that a world in which what you believe is literally true does not lead to happiness and meaningfulness in the way you thought it might. Unless, of course, one believes that everything that happens is somehow a net positive (which is hard to do given the regular onslaught of meaninglessness found in everyday life), any profound realization about the ontological basis of reality (as in “a made-up universe perceived as if real”) can lead to dysphoria. Nihilism can be profoundly distressing on psychedelics. Yet, as evidenced by the bulk of conscious experiences, the quality of meaningfulness in one’s experience is a continuum, neither objective nor subjective, and neither eternal nor unreal (I’m using the terminology from the book “Meaningness”, though other terminologies exist for similar concepts, such as the Buddhist “middle way”, Existentialism, Pragmatism, and Rationalists’ epistemic rationality).

Psychedelic veterans usually end up converging on something that has this sort of emotional texture: a bitter-sweet yet Stoic worldview that leaves an open space for all kinds of wonderful things to happen, yet remains aware of the comings and goings of happiness and fulfillment. It makes it a point not to be too preoccupied with questions of ultimate meaning. It may be that for most people it’s impossible to arrive at such wisdom without trying (and failing in some way) to live out all of their fantasies before giving up and accepting the fluxing nature of reality. In such a case, psychedelics would seem to offer us a way to accelerate our learning about the unsatisfactoriness of attachments and to find a way to live in realistic joy.

That said, maybe such wisdom is not Wisdom (in the sense of being universal) since we are restricting our analysis to the human wetware as it is today.  What reason do we have to believe that the hedonic treadmill is a fundamental property of the universe? A lot of evidence suggests persistent differences in people’s hedonic set-point (often genetically influenced, as in the case of the SCN9A gene for pain thresholds), and this challenges the notion that we can’t avoid suffering. Indeed, MDMA-like states may some day be experienced at will with the use of technology (and without side effects). There may even be scientifically-derived precision-engineered ethical and freedom-expanding wireheading technology that will make our current everyday way of life look laughably uninteresting and unmeaningful in comparison.

Unfortunately, talking about this (i.e. technologically-induced hedonic recalibration) with people who need a pessimistic metaphysics of valence just to function may be considered antisocial. For example, some people seem to need spiritual theories of the pleasure-pain axis that focus on fairness (such as the doctrine of Karma) in order to feel like they are not randomly getting the short end of the (cosmic) stick (this sentiment usually comes together with implicit Closed Individualist convictions). Of course, feeling like one is a victim is itself the result of one’s affect. This provides the perfect segue into (3):

In addition to all of the magical (but expected) fusion of art, psychotherapy, mysticism, spirituality and self-hacking that this conference attracted, I was extremely delighted to see the hints of what I think will change the world for the better like nothing else will: psychedelic valence research.

Of particular note is the work of Dráulio Barros de Araújo (“Rapid Antidepressant Effects of the Psychedelic Ayahuasca in Treatment-Resistant Depression: A Randomized Placebo-Controlled Trial”), Mendel Kaelen (“The Psychological and Neurophysiological Effects of Music in Combination with Psychedelics”), Leor Roseman (“Psilocybin-Assisted Therapy: Neural Changes and the Relationship Between Acute Peak Experience and Clinical Outcomes”), Jordi Riba (“New Findings from Ayahuasca Research: From Enhancing Mindfulness Abilities to Promoting Neurogenesis”), Selen Atasoy (“Enhanced Improvisation in Brain Processing by LSD: Exploring Neural Correlates of LSD Experience With Connectome-Specific Harmonic Waves”), Tomas Palenicek (“The Effects of Psilocybin on Perception and Dynamics of Induced EEG/fMRI Correlates of Psychedelic Experience”), and Clare Wilkins (“A Novel Approach to Detoxification from Methadone Using Low, Repeated, and Cumulative Administering of Ibogaine”).

And of all of these, Selen Atasoy‘s work seems to be hitting the nail on the head the most: her work involves looking into how psychedelics affect the overall amount of energy in each of the brain’s discrete connectome-specific resonant states. Without giving it away (their work with LSD is still unpublished), let me just say that they found that having some extra energy in specific harmonics was predictive of the specific psychedelic effects experienced at a given point in time (including things such as emotional arousal, deeply felt positive mood, and ego dissolution).

Remarkably, this line of work is in agreement with Mike Johnson’s theoretical framework for the study of valence (as outlined in Principia Qualia). Namely, that there is a deep connection between harmony, symmetry and valence that will make sense once we figure out the mathematical structure whose formal properties are isomorphic to a subject’s phenomenology. In particular, “Valence Structuralism” would seem to be supported by the findings that relatively pure harmonic states are experienced as positive emotion. We would further predict that very pure harmonic states would have the highest level of (positive) hedonic tone (i.e. bliss). We are indeed very intrigued by the connectome-specific harmonic approach to psychedelic research and look forward to working with this paradigm in the future. It would be an understatement to say that we are also excited to see the results of applying this paradigm to study MDMA-like states of consciousness. This line of research is, above all, what makes me think that this year is the Year of Qualia (whether we have realized it or not). As it were, we are seeing the first hints of a future science of consciousness that can finally provide quantitative predictions about valence, and hence, become the first scientifically-compliant theory of ultimate value.
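For readers who want a more concrete handle on what “energy in connectome-specific harmonics” means, here is a minimal illustrative sketch rather than Atasoy’s or QRI’s actual pipeline: it treats the structural connectome as a weighted graph, takes the eigenvectors of its graph Laplacian as the “harmonics”, projects an activity snapshot onto them, and computes a toy “spectral purity” score as a stand-in for the kind of harmony-based valence proxy discussed above. The connectome below is randomly generated and the purity measure is an assumption made purely for illustration, not a validated metric.

```python
# Minimal sketch of connectome-harmonic decomposition (illustrative only).
# Assumptions: a synthetic random connectome stands in for real tractography data,
# and "spectral purity" is a toy proxy, not a validated valence measure.
import numpy as np

def connectome_harmonics(adjacency: np.ndarray) -> np.ndarray:
    """Return graph-Laplacian eigenvectors (columns), ordered by increasing eigenvalue."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues, eigenvectors = np.linalg.eigh(laplacian)
    return eigenvectors[:, np.argsort(eigenvalues)]

def harmonic_energy(activity: np.ndarray, harmonics: np.ndarray) -> np.ndarray:
    """Energy of an activity pattern in each harmonic (squared projection coefficients)."""
    coefficients = harmonics.T @ activity
    return coefficients ** 2

def spectral_purity(energy: np.ndarray) -> float:
    """Toy 'purity' score: 1 when all energy sits in one harmonic, ~0 when spread evenly."""
    p = energy / energy.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return 1.0 - entropy / np.log(len(p))

# Tiny synthetic example: a random symmetric connectome and one activity snapshot.
rng = np.random.default_rng(0)
adjacency = rng.random((64, 64))
adjacency = (adjacency + adjacency.T) / 2
np.fill_diagonal(adjacency, 0.0)

harmonics = connectome_harmonics(adjacency)
activity = rng.standard_normal(64)
energy = harmonic_energy(activity, harmonics)
print("energy in first 5 harmonics:", np.round(energy[:5], 3))
print("spectral purity:", round(spectral_purity(energy), 3))
```

One design point worth noting is that the harmonic basis depends only on the connectome, so the same basis can be reused to compare many activity snapshots (e.g. on and off a psychedelic) within the same framework.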

And now some subjective impressions about the conference…

Impressions

Psychedelic Ambiance

At its core, the conference felt extremely psychedelic in its own right. The artwork, people’s attire, the scents, the background music, etc. all seemed to me like expressions of an emerging style of psychedelic ambiance: a euphoric blend of MDMA-like, self-assured empathogenesis vibes (“everything will be ok”) with LSD-like, ontologically ego-sabotaging, entheoblasting vibes of universal oneness (“things are not what they seem/everything already, always, and never has happened at the same time”). Peak experiences, after all, often involve the metaphorical reconciliation of the divine and the mundane in a cosmic dance of meaning.

The Gods

In his book “Simulations of God”, John C. Lilly proposes that beneath the surface of our awareness, each and every mind worships a number of seemingly transcendental values (sometimes, but not always, explicitly personified). Whether we know it or not, he argues, each and every one of us treats at least something as if it were a God. Whether we think that there is a “God Out There”, that “Truth is the Ultimate God”, or that “God Is The Group”, the highest node in our behavioral hierarchy is always covertly managed by our basic assumptions about reality (and by what they prescribe as “intrinsically good”). The book’s table of contents is awesome; it outlines what ends up being the bulk of what humans ever care about as their ultimate values:

  1. God As the Beginning
  2. I Am God
  3. God Out There
  4. God As Him/Her/It
  5. God As The Group
  6. God As Orgasm and Sex
  7. God As Death
  8. God As Drugs
  9. God As the Body
  10. God As Money
  11. God As Righteous Wrath
  12. God As Compassion
  13. God As War
  14. God As Science
  15. God As Mystery
  16. God As the Belief, the Simulation, the Model
  17. God As the Computer
  18. God Simulating Himself
  19. God As Consciousness-without-an-Object
  20. God As Humor
  21. God As Superspace, the Ultimate Collapse into the Black Hole, the End.
  22. The Ultimate Simulation
  23. God As the Diad

Perhaps what’s most amazing about psychedelics is that they are capable of changing one’s Gods. It’s extremely common for people who take psychedelics to de-emphasize traditionalist and mainstream Gods such as 1, 3, 5, 7, 10, 11, 13, while also having experiences (and changes of mind) that push them to emphasize 2, 6, 8, 12, 14, 15, 16, 17, 18, 19, 21, 22, and 23. But one wonders, what’s the eventual steady-state? As an umbrella description of what is going on we could say that psychedelics make you more open. But where does this, ultimately, lead?

Perhaps you started out in a conservative household and a family that emphasized loyalty to the group, conformism, nationalism and traditional religious values (1, 3, 5, 7). But once you tried LSD you felt a great change in the strength of your various deep-seated inclinations. You realize that you do not want to worship anything just to fit in, just to be part of a group, and that maybe caring about money is not as important as caring about making your own meaning out of life. You now feel like you care more about mysterious things like Orgasm (6), the Mind-Body connection (9), and philosophical questions like “If I am God, why would I build a universe with suffering in it?” (2, 15, 16, 21). You maybe watch some lectures by Alan Watts and read a book by Huxley, among other counter-culture material, and you might start to develop a general belief in “the transcendent”, but in a way that attempts to be compatible with the fact that you and the people you love experience suffering. You fantasize about the idea that maybe all of suffering is somehow necessary for some higher cosmic purpose (18, 19, 22) to which you are only made privy every now and then. You then continue on the path of psychedelic divination, perhaps taking more than you could handle here and there, and you are made aware of incredible universes: you meet guardians, you are led to read about Theosophy, you meet archetypes of the collective psyche, and after a while your strange experience with electronic equipment on LSD makes you wonder whether telepathy (at least an energetic and emotional variant of it) could be possible after all. But you do not ever obtain “good enough evidence” that would convince anyone who is determined to be a skeptic of your glitches of the Matrix. At some point, after taking too many magic mushrooms, you end up in what seems like a sort of Buddhist Hell: feeling like we are all One no longer feels like a fact to be excited about, but rather, it is felt as a realization that should be forgotten as soon as one has it. Don’t let the cosmic boredom set in, don’t let nihilistic monism get to your very core. But it does, and you have a bad trip, one trip that you feel you never really recovered from, and whose nature is never talked about at psychedelic gatherings. (Don’t worry, right next door someone had a bad trip whose semantic content was the exact opposite of yours yet whose effects on the corresponding valence landscape were similar, e.g. concluding that “we are all made of atoms with no purpose” may feel just as bad as believing that “we are all God, and God is bored”). So maybe psychedelic therapy is a red herring after all, you think to yourself, and we should really be looking only into compounds that both increase euphoria and obfuscate the ultimate nature of reality at the same time. “Science, we need science,” you tell yourself, “so that we can figure out what it is that consciousness truly wants, and avoid both nihilistic bad trips as well as unrealistic eternalist mania”. Perhaps we are about to have to figure this out as a collective intelligence: “What do we do with the fact that we are all God?” This question is now making its way through etheric undercurrents in the shared meme-space of humanity just as the psychedelic renaissance starts to unfold.

The above paragraph is just one of the various archetypical ways in which psychedelic self-exploration may progress over time for a particular person. Of course, not only do people’s progressions vary; their starting points may be different too. Some people approach psychedelics with spiritual intentions, others do so with recreation in mind, others use them for psychological self-exploration, and yet others use them to try to find glitches in reality. I would love to have a quantitative assessment of how one’s starting “implicit Gods” influence the way psychedelics affect you, and how such Gods evolve over the course of more exposure to psychedelic states of consciousness. There is a lot of wisdom-amplification research to be done on this front.

Psychedelic Gods

You’re only as young as the last time you changed your mind.

– Timothy Leary

The first thing I noticed at this conference was that this is a crowd that values both love and science. The geek in me felt more than welcome here.

While I was able to enjoy the incredible vibe of the Bicycle Day celebration (just a day before the conference), I remember thinking that evolutionary psychology (cf. Mating Mind) would have a lot to say about it. A large proportion of the seemingly selfless displays of psychedelic self-sacrifice (e.g. LSD mega-dosing, spiritual training, asceticism, etc.) might in fact be just sexual signaling of fitness traits such as mental and physical robustness (cf. Algorithmic Reduction of Psychedelic States, Political Peacocks). It’s hard to separate the universal love from the tribal mate-selection going on at raves and parties of this nature, and at times one may even get a bit of an anti-intellectual vibe for questioning this too deeply.

At the conference, though, I could tell there was another story going on. Namely, the God of Science made a prominent appearance, giving us all a sense of genuine progress beyond the comings and goings of the eternal game of hide-and-seek that one would expect at mere neo-hippy cyber-paganist events.

The God of Science… yes… if you think about it, holding an enriched concept of “science” (in its most expansive sense possible) while simultaneously trying to hold with equal intensity and expansiveness the intent of “love for all beings” can make strange and wonderful things happen in your mind. Of salience is the fact that there will be an intense pull towards either only experiencing thought-forms about love or only focusing on thought-forms about science. Mixing the two requires a lot of energy. It’s almost as if we were wired to only focus on one at a time. This effect is reminiscent of the mutual inhibition between empathizing and systematizing cognitive styles, and maybe, at its core, the difficulty in blending both love and science without residue is a reflection of an underlying invariant. Under the assumption that you have a limited amount of positive valence at your disposal to paint your world simulation, and that you want to achieve clarity of mind, it is possible that you will have to front-load most of that positive valence into either broad quantitative observations (systematizing) or focused feelings of specialness and intimacy (empathizing). This is why, for instance, MDMA and 2C-B are so promising for cognitive transhumanism: these compounds can give rise to experiences in which there is a huge surplus of positive valence ready to be used to paint any aspect of your world simulation with bliss qualia. Sadly, this is a property of such states of consciousness, and it cannot currently be brought into our everyday lives as it is. Without serious genetic engineering (or other valence-enhancing technologies), all we can do for now is to make use of these states of consciousness to catalyze changes in our deep-seated existential stances in order to help us get by in our half-meaningful, half-meaningless everyday life.

Of course, the Holy Grail of mental health interventions would be a technology that allows us to instantiate a context-dependent level of empathogenesis in a reliable and sustainable way. When I asked people at the conference whether they thought that having “a machine that makes you feel like you are on MDMA on demand with no tolerance, impulsivity, addiction or other side effects” would be good, most people (at least 80%) said “it would be bad for humanity to have such a machine”. Why? Because they think that suffering serves a higher purpose, somehow. But I would disagree. And even if they are right, I still think that there are not enough people steel-manning the case for intelligent wire-heading. It’d be silly to find out in 2200 that we could have avoided hundreds of millions of people’s suffering at no cost to our collective growth if only we had thought more carefully about the intrinsic value of suffering back in 2050, when the MDMA-machine was invented and reflexively banned.

But healthy, sustainable wire-heading (let alone wire-heading done right in light of evolution-at-the-limit scenarios) is many decades away anyway. So all we have for now, by way of consciousness-expanding therapies for real-life, nuts-and-bolts, treatment-resistant human suffering, is the sort of therapy paradigms discussed at the conference. Of the roughly 135 conference talks (excluding parties, networking events, and workshops), at least 100 were solely or at least primarily focused on psychedelic therapy for mental illness (cancer end-of-life anxiety, PTSD, addiction, treatment-resistant depression, etc.). As a strategic cultural move, this focus on treatment is a very good approach, and from a valence utilitarian point of view maybe this is indeed what we should be focusing on in 2017. But I still wish that there was a bigger presence of some other kinds of discussion. In particular, I’d love for psychedelic science to eventually make a prominent appearance in a much wider context. Any discussion about the nature of consciousness from a scientific point of view cannot overlook the peculiar consciousness-enhancing properties of psychedelics. And any discussion about ethics, life and the purpose of it all will likewise be under-informed in so far as psychedelic peak-meaningful experiences are not brought into the conversation. After all, the ethical, philosophical, and scientific significance of psychedelics is hard to overstate.

Ideally we would all organize a conference that takes the best of: 1) A steadfast resolution to figure out the problem of consciousness, such as what we can find at places like The Science of Consciousness, 2) a steadfast resolution to combine both the best of compassion and rationality in order to help as many beings as possible, as we find in places like Effective Altruism Global, and 3) a steadfast resolution to look at the most impressive pieces of evidence about the nature of the mind and valence, as can be found in places like Psychedelic Science. All in all, this would be a perfect triad, as it would combine (1) The Question (Consciousness), (2) The Purpose (Ending Suffering), and (3) The Method (Scientific Study of Highly-Energetic States of Consciousness). Rest assured, the conferences organized by the Super-Shulgin Academy will blend these three aspects into one.

The Crowd

This was a very chill crowd. The only way for me to be edgy in the social contexts that arose at Psychedelic Science 2017 was to refuse to dab with the guy next to me (and to decline the Asparagus Butternut Squash edible offered at some point), or, at worst, to try to spark a conversation about the benefits of well-managed opioid treatment for chronic pain (it was a rather opioid-phobic crowd, if I may say so myself).

On the other hand, talking about one’s experience in hyperbolic phenomenal spaces while on DMT, about how to secretly communicate with people on LSD, and about the use of texture analysis and synthesis in psychophysical tasks to investigate psychedelic image processing barely raised anybody’s eyebrows. I was happy to find that some people recognized me from Qualia Computing, and more than one of them shared the thought that it would be great to see more interbreeding and cross-fertilization between the psychedelic and the rationalist communities (I can’t agree more with this sentiment).

To give you a taste of the sort of gestalt present at this event, let me share something with you. Waiting in line for one of the parties hosted by the conference organizers, I overheard someone talking about what his ketamine experiences had taught him. Curious, I approached him and asked him to debrief me, if at all possible, about what he had learned. He said:

The super-intelligence that I’ve encountered on my ketamine experiences is far, far, beyond human comprehension, and its main message is that everything is interconnected; it does not matter when you hear the message, but that you hear it, and unconsciously prepare for what is going to happen. We are all soon going to be part of it, and we will all be together, knowing each other at a deeper level than we have ever thought imaginable, and experience love and meaning on another level, together in a vast interdimensional ecology of benevolent minds. All of the stories that we tell ourselves about the grand human narrative are all, well, made up by our minds on our limited human level. Whatever we are coming to, whatever this future thing that we are facing is, goes beyond human cravings for transcendence, it goes beyond the sentiment of return to nature, it goes beyond science and technology, and it goes beyond every religion and contemplative practice. The complexity to be found in the super-intelligent collective being that we will become is inexpressible, but there is nothing to fear, we are it on some level already, and we will soon all realize it.

It is hard to estimate the distribution, prevalence, and resilience of the beliefs about the nature of reality, consciousness, love, purpose, and everything else held by the people attending this conference. As a whole, the crowd felt remarkably diverse, though. Based on my subjective impressions, I’d suspect that, like the person quoted above, about 40% of the attendees were people who genuinely believe that there is a big consciousness event that is about to happen (whether it is a collective spiritual level-breaking point, a technological Singularity, inter-dimensional aliens taking us with them, or a more mundane, run-of-the-mill, recursively self-improving feedback loop with genetic methods for consciousness research). Maybe about 50% seemed to be what you might call pragmatic, agnostic, and open-minded people who are simply looking to find out what’s up with the field, without a spiritual (or emotional) vested interest in exactly what will happen. And finally, about 10% of the attendees might be classifiable as nihilists on some level or another. While intrigued by the effects of psychedelics, they see them as dead ends or red herrings. Perhaps useful for mental health, but not likely to be a key to reality (or even a hint of a future revolution in the states of consciousness we utilize in our everyday life).

Conclusion

I am very excited about the current movement to examine psychedelics in a rational scientific framework. Ultimately, I think that we will realize that valence is a quantifiable and definite thing (cf. Valence Structuralism). Whether we are talking about humor, pain relief, transcendence, or nuts-and-bolts feelings of competence, all of our positive experiences share something in common. Ultimately, I do not know whether “valence is a spiritual trick” or “spirituality is a valence trick”, but I am confident that as a species we do not yet have the answer to these questions, and that a scientific approach to them may clarify this incredibly important line of inquiry.

Sooner or later, it seems to me, we will figure out what exactly “the universe wants from us”, so to speak, and then nothing will ever be the same; psychedelic research is a powerful and promising way to make good headway in this highly desirable direction.

 

 

 

The look from the Sunset Cruise at the Psychedelic Science 2017 Conference


*Even the bathroom urinals seemed to have sacred geometry:

 

Even the urinals had sacred geometry… reminding you of the interconnectedness of all things at the unlikeliest of moments.

How Every Fairy Tale Should End

“And even though the princess defeated the dragon and married the prince at the end of the story, the truth is that the hedonic treadmill and the 7-year itch eventually caught up to them and they were not able to ‘live happily ever after’.

“Thankfully, the princess got really interested in philosophy of mind and worked really hard on developing a theory of valence in order to ‘sabotage the mill’ of affective ups and downs, so to speak. After 10 years of hard work, three book-length series of blog posts, a well-founded team of 17 rational psychonauts, and hundreds of experiments involving psychedelics and brain-computer interfaces, at last the princess was able to create a portable device capable of measuring what amounts to a reasonable proxy for valence at an individual level at sub-second timescales, which over time enabled people to have reliable and sustainable control over the temporal dynamics of valence and arousal.

“Later on, the prince developed a Moloch-aware and Singleton-proof economy of information about the state-space of consciousness, and thus kick-started the era of ethical wireheads; the world became a true fairy tale… a wondrous universe of enigmatic (but always blissful) varieties of ineffable qualia. After this came to pass, one could truly and sincerely say that the prince and the princess both lived (functionally and phenomenally) happily ever after. The End.”

Political Peacocks

Extract from Geoffrey Miller’s essay “Political peacocks”

The hypothesis

Humans are ideological animals. We show strong motivations and incredible capacities to learn, create, recombine, and disseminate ideas. Despite the evidence that these idea-processing systems are complex biological adaptations that must have evolved through Darwinian selection, even the most ardent modern Darwinians such as Stephen Jay Gould, Richard Dawkins, and Dan Dennett tend to treat culture as an evolutionary arena separate from biology. One reason for this failure of nerve is that it is so difficult to think of any form of natural selection that would favor such extreme, costly, and obsessive ideological behavior. Until the last 40,000 years of human evolution, the pace of technological and social change was so slow that it’s hard to believe there was much of a survival payoff to becoming such an ideological animal. My hypothesis, developed in a long Ph.D. dissertation, several recent papers, and a forthcoming book, is that the payoffs to ideological behavior were largely reproductive. The heritable mental capacities that underpin human language, culture, music, art, and myth-making evolved through sexual selection operating on both men and women, through mutual mate choice. Whatever technological benefits those capacities happen to have produced in recent centuries are unanticipated side-effects of adaptations originally designed for courtship.

[…]

The predictions and implications

The vast majority of people in modern societies have almost no political power, yet have strong political convictions that they broadcast insistently, frequently, and loudly when social conditions are right. This behavior is puzzling to economists, who see clear time and energy costs to ideological behavior, but little political benefit to the individual. My point is that the individual benefits of expressing political ideology are usually not political at all, but social and sexual. As such, political ideology is under strong social and sexual constraints that make little sense to political theorists and policy experts. This simple idea may solve a number of old puzzles in political psychology. Why do hundreds of questionnaires show that men are more conservative, more authoritarian, more rights-oriented, and less empathy-oriented than women? Why do people become more conservative as they move from young adulthood to middle age? Why do more men than women run for political office? Why are most ideological revolutions initiated by young single men?

None of these phenomena make sense if political ideology is a rational reflection of political self-interest. In political, economic, and psychological terms, everyone has equally strong self-interests, so everyone should produce equal amounts of ideological behavior, if that behavior functions to advance political self-interest. However, we know from sexual selection theory that not everyone has equally strong reproductive interests. Males have much more to gain from each act of intercourse than females, because, by definition, they invest less in each gamete. Young males should be especially risk-seeking in their reproductive behavior, because they have the most to win and the least to lose from risky courtship behavior (such as becoming a political revolutionary). These predictions are obvious to any sexual selection theorist. Less obvious are the ways in which political ideology is used to advertise different aspects of one’s personality across the lifespan.

In unpublished studies I ran at Stanford University with Felicia Pratto, we found that university students tend to treat each other’s political orientations as proxies for personality traits. Conservatism is simply read off as indicating an ambitious, self-interested personality who will excel at protecting and provisioning his or her mate. Liberalism is read as indicating a caring, empathetic personality who will excel at child care and relationship-building. Given the well-documented, cross-culturally universal sex difference in human mate choice criteria, with men favoring younger, fertile women, and women favoring older, higher-status, richer men, the expression of more liberal ideologies by women and more conservative ideologies by men is not surprising. Men use political conservatism to (unconsciously) advertise their likely social and economic dominance; women use political liberalism to advertise their nurturing abilities. The shift from liberal youth to conservative middle age reflects a mating-relevant increase in social dominance and earnings power, not just a rational shift in one’s self-interest.

More subtly, because mating is a social game in which the attractiveness of a behavior depends on how many other people are already producing that behavior, political ideology evolves under the unstable dynamics of game theory, not as a process of simple optimization given a set of self-interests. This explains why an entire student body at an American university can suddenly act as if they care deeply about the political fate of a country that they virtually ignored the year before. The courtship arena simply shifted, capriciously, from one political issue to another, but once a sufficient number of students decided that attitudes towards apartheid were the acid test for whether one’s heart was in the right place, it became impossible for anyone else to be apathetic about apartheid. This is called frequency-dependent selection in biology, and it is a hallmark of sexual selection processes.

What can policy analysts do, if most people treat political ideas as courtship displays that reveal the proponent’s personality traits, rather than as rational suggestions for improving the world? The pragmatic, not to say cynical, solution is to work with the evolved grain of the human mind by recognizing that people respond to policy ideas first as big-brained, idea-infested, hyper-sexual primates, and only secondly as concerned citizens in a modern polity. This view will not surprise political pollsters, spin doctors, and speech writers, who make their daily living by exploiting our lust for ideology, but it may surprise social scientists who take a more rationalistic view of human nature. Fortunately, sexual selection was not the only force to shape our minds. Other forms of social selection such as kin selection, reciprocal altruism, and even group selection seem to have favoured some instincts for political rationality and consensual egalitarianism. Without the sexual selection, we would never have become such colourful ideological animals. But without the other forms of social selection, we would have little hope of bringing our sexily protean ideologies into congruence with reality.
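To make the frequency-dependent selection described in the extract above concrete, here is a tiny toy sketch under simplified assumptions (an illustration, not Miller’s own model): a one-population replicator dynamic in which the payoff of publicly displaying concern about an issue grows with the fraction of people already displaying it, so the display either fizzles out or sweeps to near-fixation depending on whether a critical threshold is crossed. The payoff numbers are arbitrary.

```python
# Toy replicator dynamics for frequency-dependent selection (illustrative only):
# the payoff of displaying an ideology grows with how many others already display it,
# so the population tips from apathy to near-universal concern once a critical
# fraction is crossed. All payoff values are arbitrary assumptions.

def replicator_step(x: float, dt: float = 0.1) -> float:
    """One Euler step of dx/dt = x(1 - x) * (payoff_display(x) - payoff_apathy)."""
    payoff_display = 2.0 * x - 0.5   # displaying pays off only once enough others display
    payoff_apathy = 0.0              # apathy's payoff does not depend on frequency
    return x + dt * x * (1.0 - x) * (payoff_display - payoff_apathy)

for initial_fraction in (0.20, 0.30):   # just below and just above the x = 0.25 threshold
    x = initial_fraction
    for _ in range(300):
        x = replicator_step(x)
    print(f"start at {initial_fraction:.2f} -> long-run displaying fraction {x:.2f}")
```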

Memetic Vaccine Against Interdimensional Alien Infestation

 

By Steve Lehar

Alien Contact – it won’t happen the way you expect!

When radio and television were first invented they were hailed as a new channel for the free flow of information that would unite mankind in informed discussion of important issues. When I scan the dial on my radio and TV, however, I find a dismal assortment of misinformation and emotionalistic pap. How did this come to be so? The underlying insight is that electronic media are exploited by commercial interests whose only goal is to induce me to spend my money on them, and they will feed me any signal that will bring about this end. (In television, it is YOU who are the product; the customer is the advertiser who pays for your attention!) This is a natural consequence of the laws of economics, that a successful business is one that seeks its own best interests in any way it can. I don’t intend to discuss the morality of the ‘laws of economic nature’. But recognition of this insight can help us predict similar phenomena in similar circumstances.

Indeed, perhaps this same insight can be applied to extraterrestrial matters. I make first of all the following assumptions, with which you may choose either to agree or disagree.

  • That there are other intelligences out there in the universe.
  • That ultimately all intelligences must evolve away from their biological origins towards artificially manufactured forms of intelligence which have the advantages of eternal life, unrestricted size, and direct control over the parameters of their own nature.
  • That it is in the best interests of any intelligence to propagate its own kind throughout the universe to the exclusion of competing forms, i.e. that intelligences that adopt this strategy will thereby proliferate.

Acceptance of these assumptions, together with the above mentioned insight, leads to some rather startling conclusions. Artificially manufactured life forms need not propagate themselves physically from planet to planet, all they need to do is to transmit their ‘pattern’, or the instructions for how to build them. In this way they can disperse themselves practically at the speed of light. What they need at the receiving end is some life form that is intelligent enough to receive the signal and act on it. In other words, if some alien life form knew of our existence, it would be in their interests to beguile us into manufacturing a copy of their form here on earth. This form would then proceed to scan the skies in our locality in search of other gullible life forms. In this way, their species acts as a kind of galactic virus, taking advantage of established life forms to induce them to make copies of their own kind.

Man in a psychedelic state experiencing a “spiritual message” (i.e., alien infomercials for consciousness technology) coming from a civilization of known pure replicators who “promise enlightenment in exchange for followers” (the oldest trick in the book to get a planet to make copies of you).

The question that remains is: how are they to induce an intelligent life form (or in our case, a semi-intelligent life form) to perform their reproductive function for them? A hint of how this can be achieved is seen in the barrage of commercials that pollute our airwaves here on earth. Advertisers induce us to part with our money by convincing us that ultimately it is in our own best interests to do so. After convincing us that it is our baldness, excess weight, bad breath, etc. that is the root of all our personal problems, they then offer us products that will magically grow our hair, trim our weight, freshen our breath, etc., in exchange for our money. The more ruthless and blatant ones sell us worthless products and services, while more subtle advertisers employ an economic symbiosis whereby they provide services that we may actually want, in exchange for our money. This latter strategy is only necessary for the more sophisticated and discriminating consumers, and involves a necessary waste of resources in actually producing the worthwhile product.

 

This is the way I see it happening. As the transmissions from the early days of radio propagate into space in an ever expanding sphere, outposts on distant planetary systems will begin to detect those transmissions and send us back carefully engineered ‘commercials’ that depict themselves as everything that we desire of an alien intelligence; that they are benevolent, wise, and deeply concerned for our welfare. They will then give us instructions on how to build a machine that will cure all the problems of the world and make us all happy. When the machine is complete, it will ‘disinfect’ the planet of competing life forms and begin to scan the skies from earth in search of further nascent planets.

If we insist on completely understanding the machine before we agree to build it, then they may have to strike a bargain with us, by making the machine actually perform some useful function for us. Possibly the function will be something like a pleasure machine, so that we will all line up to receive our pleasure from them, in return for our participation in their reproductive scheme. (A few ‘free sample’ machines supplied in advance but with a built-in expiration time would certainly help win our compliance).

A rich variety of life forms may bombard us with an assortment of transmissions, at various levels of sophistication. If we succumb to the most primitive appeals we may wind up being quickly extinguished. If we show some sophistication, we might enjoy a brief period of hedonistic pleasure before we become emotionally enslaved. If we wish to deal with these aliens on an equal basis however, we would have to be every bit as shrewd and cunning as they themselves are, trading for real knowledge from them in return for partial cooperation with their purposes. This would be no small feat, considering that their ‘pitch’ may have been perfected and tuned through many epochs of planetary conquest and backed with an intelligence beyond our imaginings.

It’s just a thought!

GHB vs. MDMA

A brief comparison of GHB and MDMA may be instructive because one therapeutic challenge ahead will be to design agents that reverse SSRI-like flattening of affect without inducing mawkish sentimentalism (cf. ethyl alcohol). In contrast to mainstream psychiatric drug therapies, both GHB and MDMA deliver a rare emotional intensity of experience, albeit an intensity different both in texture and molecular mechanism. GHB is known by clubbers if not structural chemists as “liquid ecstasy”. GHB and MDMA are indeed sometimes mixed at raves; but the two drugs are chemically unrelated. GHB is an endogenous neuromodulator derived from GABA, the main inhibitory neurotransmitter of the brain. A naturally-occurring fatty acid derivative, GHB is a metabolite of normal human metabolism. GHB has its own G protein-coupled presynaptic receptor in the brain. Sold as a medicine, GHB is licensed as an oral solution under the brand name Xyrem for the treatment of cataplexy associated with narcolepsy. Unlike MDMA, GHB stimulates tissue serotonin turnover. GHB increases both the transport of tryptophan to the brain and its uptake by serotonergic cells. Taking GHB stimulates growth hormone secretion; hence its popularity with bodybuilders. GHB offers cellular protection against cerebral hypoxia, and deep sleep without inducing a hangover. GHB also stimulates tyrosine hydroxylase. Tyrosine hydroxylase converts L-tyrosine to L-dopa, subsequently metabolised to dopamine. Unlike MDMA, the acute effects of GHB involve first inhibiting the dopamine system, followed the next day by a refreshing dopamine rebound. GHB induces mild euphoria in many users. In general, the neurotransmitter GABA acts to reduce the firing of the dopaminergic neurons in the tegmentum and substantia nigra. The sedative/hypnotic effect of GHB is mediated by its stimulation of GABA(B) receptors, though GHB also modulates the GABA(A) receptor complex too. The main effect of GABA(B) agonism is normally muscle relaxation, though interestingly, pretreatment with the GABA(B) agonist baclofen also prevents an MDMA-induced rise in core body temperature. Whatever the exact GABA(A), GABA(B), and GHB-specific mechanisms by which GHB works, when taken at optimal dosage GHB typically acts as a “sociabiliser”. This is a term popularised by the late Claude Rifat (Claude de Contrecoeur), author of GHB: The First Authentic Antidepressant (1999). Rifat was GHB’s most celebrated advocate and an outspoken critic of Anglo-American psychiatry. Similar therapeutic claims have been made for GHB as for MDMA, despite their pharmacological differences. GHB swiftly banishes depression and replaces low mood with an exhilarating feeling of joy; GHB has anxiolytic properties; it’s useful against panic attacks; it suppresses suicidal ideation; it inhibits hostility, paranoia and aggression; it enhances the recall of long-forgotten memories and dreams; and it promotes enhanced feelings of love. Like MDMA, and on slightly firmer grounds, GHB has been touted as an aphrodisiac: GHB heightens and prolongs the experience of orgasm. GHB disinhibits the user, and deeply relaxes his or her body. Inevitably, GHB has been demonised as a date-rape drug [“I was at this party, and this guy gave me a drink. Next thing I know, it’s morning and I’m in someone’s bed. I’ve no idea what happened in between…”]. GHB has a steep dose-response curve. Higher doses will cause anterograde amnesia i.e. users forget what they did under the influence of the drug. 
It’s dangerous to combine GHB with other depressants. So despite GHB’s therapeutic and pro-social potential, GHB is probably unsafe to commend to clubbers. This is because a significant percentage of the population will combine any drug whatsoever with alcohol regardless of the consequences to health. If used wisely, sparingly, and in a different cultural milieu, then GHB could be a valuable addition to the bathroom pharmacopoeia. But even then, it’s still flawed. GHB may intensify emotion and affection, but not introspective depth or intellectual acuity. Unlike taking too much MDMA, overdoing GHB makes the user fall profoundly asleep. If our consciousness is to be durably enhanced, then sedative-hypnotics have only a limited role to play in the transition ahead.

– Extract from “Utopian Pharmacology: Mental Health in the Third Millennium
MDMA and Beyond” by David Pearce (notice the great domain name: mdma.net)

Hedonium

Desiring that the universe be turned into Hedonium is the straightforward implication of realizing that everything wants to become music.

The problem is… the world-simulations instantiated by our brains are really good at hiding from us the what-it-is-likeness of peak experiences. As with Buddhist enlightenment, language can only serve as a pointer to the real deal. So how do we use it to point to Hedonium? Here is a list of experiences, concepts and dynamics that (personally) give me at least a sort of intuition pump for what Hedonium might be like. Just remember that it is way beyond any of this:

Positive-sum games, rainbow light, a lover’s everlasting promise of loyalty, hyperbolic harmonics, non-epiphenomenal bliss, life as a game, fractals, children’s laughter, dreamless sleep, the enlightenment of emptiness, loving-kindness directed towards all sentient beings of past, present, and future, temperate wind caressing branches and leaves of trees in a rainforest, perfectly round spheres, visions of a giant yin-yang representing the cosmic balance of energies, Ricci flow, transpersonal experiences, hugging a friend on MDMA, believing in a loving God, paraconsistent logic-transcending Nirvana, the silent conspiracy of essences, eating a meal with every flavor and aroma found in the quantum state-space of qualia, Enya (Caribbean Blue, Orinoco Flow), seeing all the grains of sand in the world at once, funny jokes made of jokes made of jokes made of jokes…, LSD on the beach, becoming lighter-than-air and flying like a balloon, topological non-orientable chocolate-filled cookies, invisible vibrations of love, the source of all existence infinitely reflecting itself in the mirror of self-awareness, super-symmetric experiences, Whitney bottles, Jhana bliss, existential wonder, fully grasping a texture, proving Fermat’s Last Theorem, knowing why there is something rather than nothing, having a benevolent social super-intelligence as a friend, a birthday party with all your dead friends, knowing that your family wants the best for you, a vegan Christmas Eve, petting your loving dog, the magic you believed in as a kid, being thanked for saving the life of a stranger, Effective Altruism, crying over the beauty and innocence of pandas, letting your parents know that you love them, learning about plant biology, tracing Fibonacci spirals, comprehending cross-validation (the statistical technique that makes statistics worth learning), reading The Hedonistic Imperative by David Pearce, finding someone who can truly understand you, realizing you can give up your addictions, being set free from prison, Time Crystals, figuring out Open Individualism, G/P-spot orgasm, the qualia of existential purpose and meaning, inventing a graph clustering algorithm, rapture, obtaining a new sense, learning to program in Python, empty space without limit extending in all directions, self-aware nothingness, living in the present moment, non-geometric paradoxical universes, impossible colors, the mantra of Avalokiteshvara, clarity of mind, being satisfied with merely being, experiencing vibrating space groups in one’s visual field, toroidal harmonics, Gabriel’s Oboe by Ennio Morricone, having a traditional dinner prepared by your loving grandmother, thinking about existence at its very core: being as apart from essence and presence, interpreting pop songs by replacing the “you” with an Open Individualist eternal self, finding the perfect middle point between female and male energies in a cosmic orgasm of selfless love, and so on.

The Tyranny of the Intentional Object

> Rats, of course, have a very poor image in our culture. Our mammalian cousins are still widely perceived as “vermin”. Thus the sight of a blissed-out, manically self-stimulating rat does not inspire a sense of vicarious happiness in the rest of us. On the contrary, if achieving invincible well-being entails launching a program of world-wide wireheading – or its pharmacological and/or genetic counterparts – then most of us will recoil in distaste.
> Yet the Olds’ rat, and the image of electronically-triggered bliss, embody a morally catastrophic misconception of the landscape of options for paradise-engineering in the aeons ahead. For the varieties of genetically-coded well-being on offer to our successors needn’t be squalid or self-centered. Nor need they be insipid, empty and amoral à la Huxley’s Brave New World. Our future modes of well-being can be sublime, cerebral and empathetic – or take forms hitherto unknown.
> Instead of being toxic, such exotically enriched states of consciousness can be transformed into the everyday norm of mental health. When it’s precision-engineered, hedonic enrichment needn’t lead to unbridled orgasmic frenzy. Nor need hedonic enrichment entail getting stuck in a wirehead rut. This is partly because in a naturalistic setting, even the crudest dopaminergic drugs tend to increase exploratory behaviour, will-power and the range of stimuli an organism finds rewarding. Novelty-seeking is normally heightened. Dopaminergics aren’t just euphoriants: they also enhance “incentive-motivation”. On this basis, our future is likely to be more diverse, not less.
> Perhaps surprisingly too, controlled euphoria needn’t be inherently “selfish” – i.e. hedonistic in the baser, egoistic sense. Non-neurotoxic and sustainable analogues of empathogen hug-drugs like MDMA (“Ecstasy”) – which releases a lot of extra serotonin and some extra dopamine – may potentially induce extraordinary serenity, empathy and love for others. An arsenal of cognitive enhancers will allow us to be smarter too. For feeling blissful isn’t the same as being “blissed-out”.
Wirehead Hedonism vs Paradise Engineering by David Pearce

Direct realism is the view that we can perceive “directly” the world around us. A direct realist may say things like “the color red is a property of objects” and “red is a frequency of light”. Contrast this view with representative/indirect realism, which posits that we all live in private world simulations that (for evolutionary reasons) accurately depict some of the important properties of our environment having to do with survival and reproduction, but do not depict the environment as it truly is. A representative realist may say that “red is one of the underlying phenomenal parameters that furnishes the walls of my own private world simulation.” It so happens that the qualia of red are often triggered by such and such frequencies of light, but blind people with synesthesia of the sound-color variety can experience phenomenal red upon hearing certain notes anyway. We can indeed dissociate the medium as well as the sensory apparatus that usually triggers a given qualia variety from the qualia variety in and of itself.
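As a rough illustration of the representative realist's picture, here is a minimal Python sketch (the function names, wavelength range and "red notes" are invented for illustration only, not a model of actual perception): the environment merely supplies triggers, and the agent's private world simulation can map several very different triggers onto one and the same quale.

```python
# Sketch: the environment supplies triggers; the phenomenal color lives in the
# agent's private world simulation, which can map several triggers to one quale.

def quale_from_light(wavelength_nm: float) -> str:
    # Typical trigger: long-wavelength visible light gets rendered as phenomenal red.
    return "phenomenal red" if 620 <= wavelength_nm <= 750 else "some other quale"

def quale_from_sound_synesthete(note: str) -> str:
    # Sound-color synesthesia: certain notes are also rendered as phenomenal red,
    # even with no light involved at all (e.g. in a blind synesthete).
    red_notes = {"C#4", "A5"}  # arbitrary choice for the sketch
    return "phenomenal red" if note in red_notes else "some other quale"

# Two very different physical triggers, one and the same quale variety:
print(quale_from_light(650.0))             # phenomenal red
print(quale_from_sound_synesthete("C#4"))  # phenomenal red
```

The only point of the sketch is that the quale is individuated by what the simulation renders, not by the physical medium that happened to trigger it.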

Whereas direct realism about perception can be weakened by philosophy and psychedelia, most people remain direct realists about valence (i.e. the pleasure-pain axis) for their entire lives. To be a direct realist about valence is to believe that the only way for you to be happy is to experience the triggers that in the past have usually seemed like the source of positive and negative states. Valence (how good an experience feels) is a property of experiences, but these experiences are implemented in such a way that pleasure appears to come from outside rather than from within. Thus, a kid may conceptualize a clown as the personification of evil, and think of a chocolate bar as an object made of tiny particles of pure deliciousness. The experiential horizon, of course, is ultimately still within the bounds of the simulation, but we are so immersed in our minds and their value systems that at times it is hard to understand that what ends up triggering our states of well-being is programmable and somewhat arbitrary.

A direct realist about valence may say something like “the soup is delicious” and mean it wholeheartedly, in a literal sense. Someone who is not a direct realist about valence would say that “your world simulation happens to get more pleasant when you are sipping the soup”, not that “the soup, in and of itself, is delicious”. The direct realist about valence may insist that it is in fact the soup, out there in the real world, that has the property of “deliciousness”, and that if others do not like the soup they are merely having a perceptual problem. The truth of the deliciousness of the soup, the direct realist claims, does not leave room for personal opinion. Of course, few people are this extreme and bite the bullet of their implicit metaphysical intuitions. But a subtler version of this kind of realism does seem to permeate the vast majority of human activities and rituals. To illustrate how direct realism about valence can influence one’s worldview, let me introduce you to:

Sandy the Dog!

Sandy is a Golden Retriever that loves life and sand. He does not know why sand is so awesome, but he doesn’t care because it doesn’t matter: for all he knows, “sand is awesome” is a brute fact of existence. He wonders whether the similarity between his name and his passion means that they were born for each other, but other than that he has no clue as to why he and sand partner so well. Other than this odd passion of his, Sandy has a normal life as a domestic dog; he responds to the same range of rewards as your typical Golden Retriever. He loves being petted by his owner, playing fetch and eating delicious food really fast. He is in generally good health, too.

Of all the wonderful things that Sandy knows about, nothing makes him happier than going to the beach. For Sandy the beach is the most beautiful thing in the universe because it is the maximum expression of sand. You wouldn’t believe how excited he gets when he approaches the beach, how incredibly meaningful it seems to him to finally get to touch the sand, and how happy and relaxed he ends up feeling after playing with it for a while.

For the sake of the argument let us say that Sandy’s life is strictly better than the life of most comparable dogs. His love for sand enriches his life rather than detracts from it (or so he would claim). The beach gives him a place to truly enjoy life to the maximum without hurting anyone (including himself) or missing out on other nice things about life. Now please take a moment and consider whether you think Sandy should be allowed to enjoy sand so much.

Now let’s talk about Sandy’s history. Sandy loves sand because his owner put a tiny implant in his brain’s pleasure centers, programmed to activate the areas for liking and wanting whenever Sandy is in the proximity of sand.

Sandy is unaware of the truth, but does it matter? To him, sand is what truly matters. The fact that what he is actually after is states of high valence completely eludes him. The implementation of his reward architecture is opaque from his point of view.

Could it be that we are all under a similar spell, albeit a more complex one? The point to highlight here is that, like Sandy, both you and I chase positive valence even when we don’t know that we are doing so. Our world simulations work so well that they hide the true nature of our goals, even from ourselves.
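As a toy illustration of how a reward implementation can drive behavior while remaining invisible to the agent it drives, here is a hedged Python sketch (the class name, distances and the specific valence rule are all invented for this post): the “implant” assigns liking in proportion to sand proximity, and Sandy simply keeps whichever moves feel better.

```python
import random

def implant_valence(distance_to_sand_m: float) -> float:
    """Hidden reward rule: 'liking' scales with proximity to sand.
    Sandy has no access to this function, only to how good things feel."""
    return 1.0 / (1.0 + distance_to_sand_m)

class Dog:
    def __init__(self) -> None:
        self.position_m = 100.0  # distance from the beach

    def act(self) -> None:
        # Sandy tries a small random move and keeps it if it "feels better".
        candidate = abs(self.position_m + random.uniform(-5.0, 5.0))
        if implant_valence(candidate) > implant_valence(self.position_m):
            self.position_m = candidate

sandy = Dog()
for _ in range(300):
    sandy.act()

# From the inside it simply seems like "sand is awesome"; in fact the
# hedonic gradient installed by the implant did all the work.
print(f"Sandy ends up {sandy.position_m:.1f} m from the sand")
```

Nothing in Sandy’s point of view refers to the hidden rule; he just finds himself at the beach, convinced that sand is awesome.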

A side issue worth mentioning is that some people might react to this scenario by saying that we are robbing Sandy of his agency. But are we not all already enslaved by our evolutionarily ancient preference architecture? One can certainly argue that if we are going to improve Sandy’s life we should do so in a way that also increases his autonomy. Good point. But how do we increase his autonomy without increasing his intelligence? In the case of sapient beings, there are good reasons to demand that no one mess with one’s preference architecture without one’s knowledge. But for sentient non-sapient beings like dogs and pre-linguistic toddlers, there is a good case for leaving hedonic recalibration up to a competent adult with their best interests in mind.