Mental Health as an EA Cause: Key Questions

Michael Johnson and I will be hanging out at the EA Global (SF) 2017 conference this weekend representing the Qualia Research Institute. If you see us and want to chat, please feel free to approach us. This is what we look like:


At EA Global 2016 at Berkeley

I will be handing out the following flyer:

Mental Health as an EA Cause Area: Key Questions

  1. What makes a state of consciousness feel good or bad?
  2. What percentage of worldwide suffering is directly caused by mental illness and/or the hedonic treadmill rather than by external circumstances?
  3. Is there a way to “sabotage the hedonic treadmill”?
  4. Can benevolent and intelligent sentient beings be fully animated by gradients of bliss (offloading nociception to insentient mechanism)?
  5. Can we uproot the fundamental causes of suffering by tweaking our brain structure without compromising our critical thinking?
  6. Can consciousness technologies play a part in making the world a high-trust super-organism?


Wallpaper symmetry chart with 5 different notations (slightly different diagram in handout)

If these questions intrigue you, you are likely to find the following readings valuable:

  1. Principia Qualia
  2. Qualia Computing So Far
  3. Quantifying Bliss: Talk Summary
  4. The Tyranny of the Intentional Object
  5. Algorithmic Reduction of Psychedelic States
  6. How to secretly communicate with people on LSD
  7. ELI5 “The Hyperbolic Geometry of DMT Experiences”
  8. Peaceful Qualia: The Manhattan Project of Consciousness
  9. Symmetry Theory of Valence “Explain Like I’m 5” edition
  10. Generalized Wada Test and the Total Order of Consciousness
  11. Wireheading Done Right: Stay Positive Without Going Insane
  12. Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation
  13. The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

Who we are:
Qualia Research Institute (Michael Johnson & Andrés Gómez Emilsson)
Qualia Computing (this website; Andrés Gómez Emilsson)
Open Theory (Michael Johnson)

Printable version:


24 Predictions for the Year 3000 by David Pearce

In response to the Quora question “Looking 1000 years into the future and assuming the human race is doing well, what will society be like?”, David Pearce wrote:

The history of futurology to date makes sobering reading. Prophecies tend to reveal more about the emotional and intellectual limitations of the author than the future. […]
But here goes…

Year 3000

1) Superhuman bliss.

Mastery of our reward circuitry promises a future of superhuman bliss – gradients of genetically engineered well-being orders of magnitude richer than today’s “peak experiences”.

2) Eternal youth.

More strictly, indefinitely extended youth and effectively unlimited lifespans. Transhumans, humans and their nonhuman animal companions don’t grow old and perish. Automated off-world backups allow restoration and “respawning” in case of catastrophic accidents. “Aging” exists only in the medical archives.
SENS Research Foundation – Wikipedia

3) Full-spectrum superintelligences.

A flourishing ecology of sentient nonbiological quantum computers, hyperintelligent digital zombies and full-spectrum transhuman “cyborgs” has radiated across the Solar System. Neurochipping makes superintelligence all-pervasive. The universe seems inherently friendly: ubiquitous AI underpins the illusion that reality conspires to help us.
Superintelligence: Paths, Dangers, Strategies – Wikipedia
Artificial Intelligence @ MIRI
Kurzweil Accelerating Intelligence

4) Immersive VR.

“Magic” rules. “Augmented reality” of earlier centuries has been largely superseded by hyperreal virtual worlds with laws, dimensions, avatars and narrative structures wildly different from ancestral consensus reality. Selection pressure in the basement makes complete escape into virtual paradises infeasible. For the most part, infrastructure maintenance in basement reality has been delegated to zombie AI.
Augmented reality – Wikipedia
Virtual reality – Wikipedia

5) Transhuman psychedelia / novel state spaces of consciousness.

Analogues of cognition, volition and emotion as conceived by humans have been selectively retained, though with a richer phenomenology than our thin logico-linguistic thought. Other fundamental categories of mind have been discovered via genetic tinkering and pharmacological experiment. Such novel faculties are intelligently harnessed in the transhuman CNS. However, the ordinary waking consciousness of Darwinian life has been replaced by state-spaces of mind physiologically inconceivable to Homo sapiens. Gene-editing tools have opened up modes of consciousness that make the weirdest human DMT trip akin to watching paint dry. These disparate state-spaces of consciousness do share one property: they are generically blissful. “Bad trips” as undergone by human psychonauts are physically impossible because in the year 3000 the molecular signature of experience below “hedonic zero” is missing.
Qualia Computing

6) Supersentience / ultra-high intensity experience.

The intensity of everyday experience surpasses today’s human imagination. Size doesn’t matter to digital data-processing, but bigger brains with reprogrammed, net-enabled neurons and richer synaptic connectivity can exceed the maximum sentience of small, simple, solipsistic mind-brains shackled by the constraints of the human birth-canal. The theoretical upper limits to phenomenally bound mega-minds, and the ultimate intensity of experience, remain unclear. Intuitively, humans have a dimmer-switch model of consciousness – with e.g. ants and worms subsisting with minimal consciousness and humans at the pinnacle of the Great Chain of Being. Yet Darwinian humans may resemble sleepwalkers compared to our fourth-millennium successors. Today we say we’re “awake”, but mankind doesn’t understand what “posthuman intensity of experience” really means.
What earthly animal comes closest to human levels of sentience?

7) Reversible mind-melding.

Early in the twenty-first century, perhaps the only people who know what it’s like even partially to share a mind are the conjoined Hogan sisters. Tatiana and Krista Hogan share a thalamic bridge. Even mirror-touch synaesthetes can’t literally experience the pains and pleasures of other sentient beings. But in the year 3000, cross-species mind-melding technologies – for instance, sophisticated analogues of reversible thalamic bridges – and digital analogs of telepathy have led to a revolution in both ethics and decision-theoretic rationality.
Could Conjoined Twins Share a Mind?
Mirror-touch synesthesia – Wikipedia
Ecstasy : Utopian Pharmacology

8) The Anti-Speciesist Revolution / worldwide veganism/invitrotarianism.

Factory-farms, slaughterhouses and other Darwinian crimes against sentience have passed into the dustbin of history. Omnipresent AI cares for the vulnerable via “high-tech Jainism”. The Anti-Speciesist Revolution has made arbitrary prejudice against other sentient beings on grounds of species membership as perversely unthinkable as discrimination on grounds of ethnic group. Sentience is valued more than sapience, the prerogative of classical digital zombies (“robots”).
What is High-tech Jainism?
The Antispeciesist Revolution
‘Speciesism: Why It Is Wrong and the Implications of Rejecting It’

9) Programmable biospheres.

Sentient beings help rather than harm each other. The successors of today’s primitive CRISPR genome-editing and synthetic gene drive technologies have reworked the global ecosystem. Darwinian life was nasty, brutish and short. Extreme violence and useless suffering were endemic. In the year 3000, fertility regulation via cross-species immunocontraception has replaced predation, starvation and disease as the means of regulating ecologically sustainable population sizes in utopian “wildlife parks”. The free-living descendants of “charismatic mega-fauna” graze happily with neo-dinosaurs, self-replicating nanobots, and newly minted exotica in surreal gardens of Eden. Every cubic metre of the biosphere is accessible to benign supervision – “nanny AI” for humble minds who haven’t been neurochipped for superintelligence. Other idyllic biospheres in the Solar System have been programmed from scratch.
CRISPR – Wikipedia
Genetically designing a happy biosphere
Our Biotech Future

10) The formalism of the TOE is known.
(details omitted; does Quora support LaTeX?)

Dirac recognised the superposition principle as the fundamental principle of quantum mechanics. Wavefunction monists believe the superposition principle holds the key to reality itself. However – barring the epoch-making discovery of a cosmic Rosetta stone – the implications of some of the more interesting solutions of the master equation for subjective experience are still unknown.
Theory of everything – Wikipedia
M-theory – Wikipedia
Why does the universe exist? Why is there something rather than nothing?
The Wave Function: Essays on the Metaphysics of Quantum Mechanics – Alyssa Ney & David Z Albert (eds.)

11) The Hard Problem of consciousness is solved.

The Hard Problem of consciousness was long reckoned insoluble. The Standard Model in physics from which (almost) all else springs was a bit of a mess but stunningly empirically successful at sub-Planckian energy regimes. How could physicalism and the ontological unity of science be reconciled with the existence, classically impossible binding, causal-functional efficacy and diverse palette of phenomenal experience? Mankind’s best theory of the world was inconsistent with one’s own existence, a significant shortcoming. However, all classical- and quantum-mind conjectures with predictive power had been empirically falsified by 3000 – with one exception.
Physicalism – Wikipedia
Quantum Darwinism – Wikipedia
Consciousness (Stanford Encyclopedia of Philosophy)
Hard problem of consciousness – Wikipedia
Integrated information theory – Wikipedia
Principia Qualia
Dualism – Wikipedia
New mysterianism – Wikipedia
Quantum mind – Wikipedia

[Which theory is most promising? As with the TOE, you’ll forgive me for skipping the details. In any case, my ideas are probably too idiosyncratic to be of wider interest, but for anyone curious: What is the Quantum Mind?]

12) The Meaning of Life resolved.

Everyday life is charged with a profound sense of meaning and significance. Everyone feels valuable and valued. Contrast the way twenty-first century depressives typically found life empty, absurd or meaningless; and how even “healthy” normals were sometimes racked by existential angst. Or conversely, compare how people with bipolar disorder experienced megalomania and messianic delusions when uncontrollably manic. Hyperthymic civilization in the year 3000 records no such pathologies of mind or deficits in meaning. Genetically preprogrammed gradients of invincible bliss ensure that all sentient beings find life self-intimatingly valuable. Transhumans love themselves, love life, and love each other.

13) Beautiful new emotions.

Nasty human emotions have been retired – with or without the recruitment of functional analogs to play their former computational role. Novel emotions have been biologically synthesised and their “raw feels” encephalised and integrated into the CNS. All emotion is beautiful. The pleasure axis has replaced the pleasure-pain axis as the engine of civilised life.
An information-theoretic perspective on life in Heaven

14) Effectively unlimited material abundance / molecular nanotechnology.

Status goods long persisted in basement reality, as did relics of the cash nexus on the blockchain. Yet in a world where both computational resources and the substrates of pure bliss aren’t rationed, such ugly evolutionary hangovers first withered, then died.
Blockchain – Wikipedia

15) Posthuman aesthetics / superhuman beauty.

The molecular signatures of aesthetic experience have been identified, purified and overexpressed. Life is saturated with superhuman beauty. What passed for “Great Art” in the Darwinian era is no more impressive than year 2000 humans might judge, say, a child’s painting by numbers or Paleolithic daubings and early caveporn. Nonetheless, critical discernment is retained. Transhumans are blissful but not “blissed out” – or not all of them at any rate.
Art – Wikipedia

16) Gender transformation.

Like gills or a tail, “gender” in the human sense is a thing of the past. We might call some transhuman minds hyper-masculine (the “ultrahigh AQ” hyper-systematisers), others hyperfeminine (“ultralow AQ” hyper-empathisers), but transhuman cognitive styles transcend such crude dichotomies, and can be shifted almost at will via embedded AI. Many transhumans are asexual, others pan-sexual, a few hypersexual, others just sexually inquisitive. “The degree and kind of a man’s sexuality reach up into the ultimate pinnacle of his spirit”, said Nietzsche – which leads to (17).

Object Sexuality – Wikipedia
Empathizing & Systematizing Theory – Wikipedia

17) Physical superhealth.

In 3000, everyone feels physically and psychologically “better than well”. Darwinian pathologies of the flesh such as fatigue, the “leaden paralysis” of chronic depressives, and bodily malaise of any kind are inconceivable. The (comparatively) benign “low pain” alleles of the SCN9A gene that replaced their nastier ancestral cousins have been superseded by AI-based nociception with optional manual overrides. Multi-sensory bodily “superpowers” are the norm. Everyone loves their body-images in virtual and basement reality alike. Morphological freedom is effectively unbounded. Awesome robolovers, nights of superhuman sensual passion, 48-hour whole-body orgasms, and sexual practices that might raise eyebrows among prudish Darwinians have multiplied. Yet life isn’t a perpetual orgy. Academic subcultures pursue analogues of Mill’s “higher pleasures”. Paradise engineering has become a rigorous discipline. That said, a lot of transhumans are hedonists who essentially want to have superhuman fun. And why not?

18) World government.

Routine policy decisions in basement reality have been offloaded to ultra-intelligent zombie AI. The quasi-psychopathic relationships of Darwinian life – not least the zero-sum primate status-games of the African savannah – are ancient history. Some conflict-resolution procedures previously off-loaded to AI have been superseded by diplomatic “mind-melds”. In the words of Henry Wadsworth Longfellow, “If we could read the secret history of our enemies, we should find in each man’s life sorrow and suffering enough to disarm all hostility.” Our descendants have windows into each other’s souls, so to speak.

19) Historical amnesia.

The world’s last experience below “hedonic zero” marked a major transition in the evolutionary development of life. In 3000, the nature of sub-zero states below Sidgwick’s “natural watershed” isn’t understood except by analogy: some kind of phase transition in consciousness below life’s lowest hedonic floor – a hedonic floor that is being genetically ratcheted upwards as life becomes ever more wonderful. Transhumans are hyper-empathetic. They get off on each other’s joys. Yet paradoxically, transhuman mental superhealth depends on biological immunity to true comprehension of the nasty stuff elsewhere in the universal wavefunction that even mature superintelligence is impotent to change. Maybe the nature of e.g. Darwinian life, and the minds of malaise-ridden primitives in inaccessible Everett branches, doesn’t seem any more interesting than we find books on the Dark Ages. Negative utilitarianism, if it were conceivable, might be viewed as a depressive psychosis. “Life is suffering”, said Gautama Buddha, but fourth millennials feel in the roots of their being that Life is bliss.
Invincible ignorance? Perhaps.
Negative Utilitarianism – Wikipedia

20) Super-spirituality.

A tough one to predict. But neuroscience will soon be able to identify the molecular signatures of spiritual experience, refine them, and massively amplify their molecular substrates. Perhaps some fourth millennials enjoy lifelong spiritual ecstasies beyond the mystical epiphanies of temporal-lobe epileptics. Secular rationalists don’t know what we’re missing.

21) The Reproductive Revolution.

Reproduction is uncommon in a post-aging society. Most transhumans originate as extra-uterine “designer babies”. The reckless genetic experimentation of sexual reproduction had long seemed irresponsible. Old habits still died hard. By the year 3000, the genetic crapshoot of Darwinian life has finally been replaced by precision-engineered sentience. Early critics of “eugenics” and a “Brave New World” have discovered by experience that a “triple S” civilisation of superhappiness, superlongevity and superintelligence isn’t as bad as they supposed.

22) Globish (“English Plus”).

Automated real-time translation has been superseded by a common tongue – Globish – spoken, written or “telepathically” communicated. Partial translation manuals for mutually alien state-spaces of consciousness exist, but – as twentieth century Kuhnians would have put it – such state-spaces tend to be incommensurable and their concepts state-specific. Compare how poorly lucid dreamers can communicate with “awake” humans. Many Darwinian terms and concepts are effectively obsolete. In their place, active transhumanist vocabularies of millions of words are common. “Basic Globish” is used for communication with humble minds, i.e. human and nonhuman animals who haven’t been fully uplifted.
Incommensurability – SEoP
Uplift (science_fiction) – Wikipedia

23) Plans for Galactic colonization.

Terraforming and 3D-bioprinting of post-Darwinian life on nearby solar systems is proceeding apace. Vacant ecological niches tend to get filled. In earlier centuries, a synthesis of cryonics, crude reward pathway enhancements and immersive VR software, combined with revolutionary breakthroughs in rocket propulsion, led to the launch of primitive manned starships. Several are still starbound. Some transhuman utilitarian ethicists and policy-makers favour creating a utilitronium shockwave beyond the pale of civilisation to convert matter and energy into pure pleasure. Year 3000 bioconservatives focus on promoting life animated by gradients of superintelligent bliss. Yet no one objects to pure “hedonium” replacing unprogrammed matter.
Interstellar Travel – Wikipedia
Utilitarianism – Wikipedia

24) The momentous “unknown unknown”.

If you read a text and the author’s last words are “and then I woke up”, everything you’ve read must be interpreted in a new light – semantic holism with a vengeance. By the year 3000, some earth-shattering revelation may have changed everything – some fundamental background assumption of earlier centuries may have been overturned that wasn’t explicitly represented in our conceptual scheme. If it exists, then I’ve no inkling what this “unknown unknown” might be, unless it lies hidden in the untapped subjective properties of matter and energy. Christian readers might interject “The Second Coming”. Learning that the Simulation Hypothesis is true would be a secular example of such a revelation. Some believers in an AI “Intelligence Explosion” speak delphically of “The Singularity”. Whatever – Shakespeare made the point more poetically: “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy”.

As it stands, yes, (24) is almost vacuous. Yet compare how the philosophers of classical antiquity who came closest to recognising their predicament weren’t intellectual titans like Plato or Aristotle, but instead the radical sceptics. The sceptics guessed they were ignorant in ways that transcended the capacity of their conceptual scheme to articulate. By the lights of the fourth millennium, what I’m writing, and what you’re reading, may be stultified by something that humans don’t know and can’t express.
Ancient Skepticism – SEoP


OK, twenty-four predictions! Successful prophets tend to locate salvation or doom within the credible lifetime of their intended audience. The questioner asks about life in the year 3000 rather than, say, a Kurzweilian 2045. In my view, everyone reading this text will grow old and die before the predictions of this answer are realised or confounded – with one possible complication.

Opt-out cryonics and opt-in cryothanasia are feasible long before the conquest of aging. Visiting grandpa in the cryonics facility can turn death into an event in life. I’m not convinced that posthuman superintelligence will reckon that Darwinian malware should be revived in any shape or form. Yet if you want to wake up one morning in posthuman paradise – and I do see the appeal – then options exist.

p.s. I’m curious about the credence (if any) the reader would assign to the scenarios listed here.

Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit: they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is there’s no ’theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside”. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.


In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about (suffering) allows a particularly clear critique of what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and that it’s impossible to objectively tell if a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering (whether monkeys can suffer, or whether dogs can), it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; if any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of some disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence; in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming one.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered that something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but to a bundle of self-organizing properties and dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness, i.e. enumerating its coherent constituent pieces, has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became; arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly grown, not shrunk, as they’ve looked into techniques for “de-reifying” consciousness while preserving some flavor of moral value. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but it’s also consistent with consciousness not being a reification at all, and instead being a real thing. Trying to “de-reify” something that isn’t a reification will produce deep confusion, just as surely as trying to treat a reification as more ‘real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’, then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system under which you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
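A toy version of this underdetermination can be written down directly. The “physical trace” and both bit-encodings below are made up purely for illustration:

```python
# A "physical system": the same sequence of states over four time steps.
trace = [7, 7, 7, 3]

# Interpretation A: state 7 encodes bit 1, state 3 encodes bit 0.
interp_a = [1 if s == 7 else 0 for s in trace]   # [1, 1, 1, 0]

# Interpretation B: state 7 encodes bit 0, state 3 encodes bit 1.
interp_b = [0 if s == 7 else 1 for s in trace]   # [0, 0, 0, 1]

# Both encodings are internally consistent maps from physics to computation,
# yet they disagree about which computation occurred; nothing in the trace
# itself privileges one reading over the other.
print(interp_a, interp_b)  # [1, 1, 1, 0] [0, 0, 0, 1]
```

Any argument that one encoding is the “real” one has to appeal to something outside the physical trace, which is exactly the point at issue.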

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect (that precisely mapping between physical processes and Turing-level computational processes is inherently impossible) in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.
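McCabe’s one-to-many point can be seen in a two-line sketch. The threshold voltage here is a hypothetical stand-in for real logic-level conventions:

```python
# Many distinct electronic states map to one logical state: the map from
# physics to logic is many-to-one, so its inverse is not a function at all.
def logical_bit(voltage):
    return 1 if voltage > 1.5 else 0   # hypothetical logic-level threshold

# Physically distinct states (2.9 V vs 3.3 V), identical logical state:
print(logical_bit(2.9), logical_bit(3.3))  # 1 1
```

Going the other way, logical “1” has no unique physical preimage, which is why interpretation must creep in whenever we read computations off of physics.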

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point, that “There’s no objective meaning to ‘the computation that this physical system is implementing’”, then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.


Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?



There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?



Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.


You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?


When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?


Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1−ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)


OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
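The amplitude bookkeeping in the Vaidman-bomb passage is easy to check numerically. This sketch just replays Aaronson’s arithmetic; ε = 0.01 is an arbitrary choice:

```python
import math

eps = 0.01
n = int(1 / eps)   # number of repetitions

# Bomb present: each repetition independently puts ~eps^2 probability on the
# "query" state (the explosion decoheres, so probabilities add classically).
p_triggered = 1 - (1 - eps**2) ** n          # ~ n * eps^2 = eps

# No bomb: the eps-rotations add up coherently, so after n = 1/eps steps the
# amplitude on the "query" state is ~sin(n * eps): an order-1 probability.
p_query = math.sin(n * eps) ** 2

print(round(p_triggered, 3), round(p_query, 3))  # 0.01 0.708
```

The contrast between the two regimes (probabilities adding linearly vs amplitudes adding linearly) is the entire trick, and shrinking ε makes the bomb-triggering probability arbitrarily small.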

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be uncomputable/reversible, would suggest s-risks aren’t as serious as FRI treats them.
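The “uncompute” puzzle hinges on the fact that U⁻¹U is literally the identity map, even though the intermediate state is genuinely different. A two-dimensional toy makes this concrete; the Hadamard matrix here is just an arbitrary self-inverse unitary, not a model of a brain:

```python
# Hadamard: a unitary that is its own inverse (H applied twice = identity).
s = 2 ** -0.5
H = [[s, s],
     [s, -s]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

start = [1.0, 0.0]
mid = apply(H, start)    # "run the simulation": the state really changes
end = apply(H, mid)      # "uncompute": the composed map is the identity

print([round(x, 6) for x in mid])  # [0.707107, 0.707107]
print([round(x, 6) for x in end])  # [1.0, 0.0]
```

Whether the transient `mid` state “counts” experientially, when the overall operation is provably a no-op, is exactly the question the computationalist has to answer.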

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”, a red herring that it not only fails to define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism: nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell good arguments apart from arguments where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming it’s possible to build stuff, and we’re working on building the object-level stuff.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism (which is to say radical skepticism/eliminativism, the metaphysics of last resort) only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of the kinds of problems people have interpreted the Hard Problem as, and also more analysis of the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice versa. And if so, how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’ or even to pick between interpretations (e.g. my popcorn example). If this isn’t the case, why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation” (i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many), but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).


In conclusion: I think FRI has a critically important goal, the reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition of what these things are. I look forward to further work in this field.


Mike Johnson

Qualia Research Institute

Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.


My sources for FRI’s views on consciousness:
Flavors of Computation Are Flavors of Consciousness
Is There a Hard Problem of Consciousness?
Consciousness Is a Process, Not a Moment
How to Interpret a Physical System as a Mind
Dissolving Confusion about Consciousness
Debate between Brian & Mike on consciousness
Max Daniel’s EA Global Boston 2017 talk on s-risks
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering
The Internet Encyclopedia of Philosophy on functionalism
Gordon McCabe on why computation doesn’t map to physics
Toby Ord on hypercomputation, and how it differs from Turing’s work
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood
Scott Aaronson’s thought experiments on computationalism
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity
My work on formalizing phenomenology:
My meta-framework for consciousness, including the Symmetry Theory of Valence
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work
My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data
A parametrization of various psychedelic states as operators in qualia space
A brief post on valence and the fundamental attribution error
A summary of some of Selen Atasoy’s current work on Connectome Harmonics

Quantifying Bliss: Talk Summary

Below I provide a summary of the Quantifying Bliss talk at Consciousness Hacking (video, 360-degree live-feed recording), which took place on June 7th, 2017. I am currently working on a longer and more precise treatment of the topic, which I will be posting here as well. That said, since the talk already makes clear, empirically testable predictions, I decided to publish this summary as soon as possible. After all, there is only a small window of opportunity to publish one’s testable predictions online before the experiment is run and they turn into “retrodictions”. By writing this out and archiving it in time, I’m enabling future-me to say “called it!” (if the results are positive) or “at least I tried” (if the experiment fails to show the predicted effects). Better do this quick, then, for science!

The Purpose of Life

We begin by asking the question “what is the purpose of life?”. In order to give a sense for where I am coming from, I explain that I think that the purpose of life is…

  1. To Understand the Universe
  2. To be Happy, and Make Others Happy

I admit that for the first half of my life I thought that the only purpose of life was to understand the universe. If anything, in light of this exclusive goal, happiness could be seen as a temporary distraction rather than something to pursue for its own sake. Thankfully, as a teenager I was exposed to philosophy of mind, was introduced to meditation, and experimented with psychedelics, all of which pointed me to the fact that (a) we don’t understand consciousness yet, and (b) happiness is really a lot more important than we usually think, even if one is only concerned with the most theoretical and abstract level of understanding possible.

I now regard “to understand the universe” and “to be happy and make others happy” as being on an equal footing. What’s more, these two life goals complement each other. On the one hand, understanding the universe will allow you to figure out how to make anyone happy. On the other hand, being happy and making others happy can keep you motivated as you figure out the nature of reality. Hence one can think of these two life goals as synergistic rather than as being in opposing camps (of course, at the edges, one will be forced to choose one over the other, but we are nowhere near the point where this is a concern).

By taking these two “purposes of life” seriously we are then faced with a crucial question: What makes an experience valuable? In other words, for someone who is both trying to understand the universe and trying to make its inhabitants as happy as possible, the question “how do you measure the value of an experience?” becomes important.

At Qualia Computing we generally answer that question using the following criteria:

  1. Does it feel good? (happy, loving, pleasant)
  2. Does it make you productive (in a good way)?
  3. Does it make you ethical?

That is to say, the value that we assign to an experience is guided by three criteria. In brief, a valuable experience is one that feels good (i.e. has positive hedonic tone), improves your productivity (in the sense of helping you pursue your own values effectively), and makes you more ethical – both towards yourself and others. That said, for the purpose of this talk, I make it explicit that I will only discuss how to measure (1). In other words, we will concern ourselves with what makes an experience feel good; ethics and productivity are discussed elsewhere.*

What is Bliss?

So what makes an experience feel good? The “feel good” quality of an experience is usually called valence in psychology and neuroscience (it is also described as the “pleasure-pain axis”). This quality is to be distinguished from arousal, which refers to the amount of energy expressed in an experience. Four examples: Excitement is a high-valence, high-arousal state. Serenity is a high-valence, low-arousal state. Anxiety is a low-valence, high-arousal state. And depression is a low-valence, low-arousal state.
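The four examples fit a simple two-axis model. The numeric coordinates below are illustrative placements, not measurements:

```python
# Each emotion as a (valence, arousal) pair, both on a [-1, 1] scale.
emotions = {
    "excitement": (+0.8, +0.8),   # high valence, high arousal
    "serenity":   (+0.8, -0.6),   # high valence, low arousal
    "anxiety":    (-0.7, +0.8),   # low valence, high arousal
    "depression": (-0.8, -0.7),   # low valence, low arousal
}

feels_good = sorted(e for e, (v, a) in emotions.items() if v > 0)
print(feels_good)  # ['excitement', 'serenity']
```

Note that the two axes cut across each other: the two states that feel good sit at opposite ends of the arousal axis.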

For some people valence and arousal are correlated (either negatively or positively, as shown by Peter Kuppens). Likewise, one’s culture can have a large influence on the way one conceptualizes valence (or ideal affect, as demonstrated in the extensive work of Jeanne Tsai). That said, valence is not a cultural phenomenon; even mice can experience negative and positive valence.

Even though valence and arousal do seem to explain a big chunk of the differences between emotions, we can nonetheless find many cases where the “textures” of two emotions feel very different even though their valence and arousal are similar. Hence we ask ourselves: how do we explain and characterize the textural differences between such emotions?

And across all of the possible intensely blissful states on offer (encompassing all of the possible inner meanings present), what exactly is shared between them all at their very core?

Some interpret holistic feelings of wellbeing as a sort of spiritual signal. In this interpretation, feeling at a very deep level that the world is good, that things fall into place perfectly, that you don’t owe anything to anyone, etc. is a sign that you are on the right (spiritual) track. Undoubtedly many people use the (often extreme) positive shift in their valence upon religious conversion as evidence of the validity of their choice. Intense positive valence may not throw Bayesian purists off-balance, but for the rest of the world, blissful experiences are often found as cornerstones of worldviews.

Other people say that bliss is “just chemicals in your brain”. Some claim that it’s more a matter of the functional state of your pleasure centers (themselves affected by dopamine, opioids, etc.) rather than the chemicals themselves. Many others are focused on what usually triggers happiness (e.g. learning, relationships, beliefs, etc.) rather than on what, absolutely, needs to happen for bliss to take place in the simplest experiential terms possible. Most who study this closely become mystics.

Could it be that there’s something structural that makes experiences feel good? Let’s say that there exists a good-fitting mathematical object that translates brain states to experiences. What mathematical property of that object would valence correspond to? Our proposal is very simple; in some sense, it is the simplest possible answer to the important problem of consciousness. We propose the Symmetry Theory of Valence.

(The important problem of consciousness asks why experience feels good and/or bad, in contrast with, e.g., the hard problem of consciousness, which asks why consciousness exists to begin with.)

The Symmetry Theory of Valence

We are pretty confident that consciousness is a real and measurable phenomenon. That’s why Consciousness Hacking is such a good venue for this kind of discussion: here we can talk freely about the properties of consciousness without getting caught up in whether it exists at all. Now, symmetry is a very general term; how can we make it precise?

Harmony feels good because it’s symmetry over time. In reality, our moments of experience contain a temporal direction. I call this a pseudo-time arrow, since its direction is likely encoded in the patterns of statistical independence between the qualia experienced. And by manipulating the symmetrical connectivity of the micro-structure of one’s consciousness, one can change the perception of time: a change in the way one evaluates when one is and how fast one is going.

In this model, the pleasure centers would work as “tuning knobs” of harmonic patterns. They establish the mood, the underlying tone to which the rest needs to adapt. And the emotional centers, including the amygdala, would be strategically positioned to add anti-symmetry instead. Hence, in this framework we would think of boredom as an “anti-symmetry” mechanism. It prevents us from getting stuck in shallow ponds, but it can be nasty if left unchecked. Cognitive activity may be in part explained by differences in boredom thresholds.

Connectome-Specific Harmonics

I was at the Psychedelic Science 2017 conference when I saw Selen Atasoy presenting her connectome-harmonics work on LSD. She applied a previously developed paradigm, whose methods and empirical tests were published in Nature Communications in 2016, to psychedelic research. For a good introduction, check out the partial transcript of her talk.

In her talk she shows how one can measure the various “pure harmonics” in a given brain. The core idea is that brain activity can be interpreted as a weighted sum of “natural resonant frequencies” for the entire connectome (white matter tracts together with the grey matter connections). They actually take the physical structure of a mapped brain and simulate the effect of applying the excitation-inhibition differential equations known for collective neural activity propagation. Then they infer the presence and prevalence of these “pure harmonics” in a brain at a given point in time using a probabilistic reconstruction.
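A minimal numerical sketch of this idea: in Atasoy’s framework the harmonics are, to a first approximation, eigenvectors of the graph Laplacian of the connectome. The ring “connectome” and all names below are purely illustrative; the real pipeline works on DTI-derived connectomes and the full excitation-inhibition dynamics.

```python
import numpy as np

def connectome_harmonics(adjacency, k=None):
    """Toy 'connectome harmonics': eigenvectors of the graph Laplacian
    of a connectome adjacency matrix, sorted from low to high frequency.
    Low-eigenvalue modes are slow, global patterns; high-eigenvalue
    modes are fast, spatially fine-grained ones."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # ascending order
    if k is not None:
        eigvals, eigvecs = eigvals[:k], eigvecs[:, :k]
    return eigvals, eigvecs

# Illustrative "connectome": 8 regions connected in a ring.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

freqs, modes = connectome_harmonics(A)
# The lowest mode of a connected graph is the constant pattern (eigenvalue ~0).
```

Brain activity at a given moment would then be expressed as a weighted sum of the columns of `modes`, and those inferred weights are what the probabilistic reconstruction estimates.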

Chladni plates here are a wonderful metaphor for these brain harmonics. This is because the way the excitation-inhibition wavefront propagates is very similar in both Chladni plates and human brains. In both cases the system drifts slowly within the attractor basin of natural frequencies, where the wavefront wraps around the medium an integer number of times. I was in awe to see her approach applied to psychedelic research. After all, Qualia Computing has indeed explored harmonic patterns in psychedelic experiences (ex. 1, ex. 2, ex. 3), and the connection was made explicit in Principia Qualia (via the concept of neuroacoustic modulation).


But what do these harmonics look like in the brain? Show me a brain!


Notice the traveling wave wrapping around the brain an integer number of times in each of these numerical solutions (source). The work by these labs is incredible, and they seem to show that the brain’s activity can be decomposed into each of these harmonics.

At the Psychedelic Science 2017 conference, Selen Atasoy explained that very low frequency harmonics were associated with Ego Dissolution in the trials that they studied. She also explained that emotional arousal, here defined as one’s overall level of energy in the emotional component (i.e. anxiety and ecstasy vs. depression and serenity), also correlated with low frequency harmonic states. On the other hand, high valence states were correlated with high frequency brain harmonics.

These empirical results are things that I claim we could have predicted with the symmetry theory of valence. I then thought to myself: let’s try to come up with other predictions. How should we consider the mixture of various harmonics, beyond merely their individual presence? How can we reconstruct valence from this novel data-structure for representing brain-states?

The Algorithm for Quantifying Bliss

Starting my reasoning from first principles (sourced from the Symmetry Theory of Valence), the natural way to take a data-structure that represents states of consciousness and recover its valence (in cases where samples occur across time in addition to space) is to isolate the noise, then quantify the dissonance; what remains is the consonance. Basically, one estimates the rough amount of symmetry (over time), the degree of anti-symmetry, and the total level of noise.

In other words, I propose that we can get an “affective signature” of any brain state by applying an algorithm to fMRI brain recordings in order to estimate the degree of (1) consonance, (2) dissonance, and (3) noise within and across the brain’s natural harmonic states. This will result in what I call “Consonance-Dissonance-Noise Signatures” of brain states (“CDNS” for short), consisting of three histograms that describe the spectra of consonance, dissonance, and noise in a given moment of experience. The algorithm to arrive at the CDNS of a brain state is as follows:

  1. Remove some of the noise in the brain state by applying the technique in Atasoy (2016), recovering the best available approximation of the harmonics present (you may apply some further denoising on the harmonics when taken as a collective).
  2. Estimate the total dissonance of the combination of harmonics by taking each pair of harmonics and quantifying their mutual dissonance.
  3. Subtract the dissonance from “all of the interactions that could have existed”; what’s left is the consonance.

This way you obtain a Consonance-Dissonance-Noise Signature.
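These steps can be sketched in code. This is a toy illustration, not the actual pipeline: the harmonic frequencies and weights are assumed to be already recovered, and `pairwise_dissonance` is a crude, hypothetical critical-band stand-in rather than a proper psychoacoustic dissonance model. The point is only the bookkeeping of the C, D, and N components.

```python
import math

def pairwise_dissonance(f1, f2, a1, a2, critical_band=0.25):
    """Crude stand-in: two harmonics contribute dissonance only when
    their relative frequency difference falls within a critical band,
    scaled by the weaker harmonic's weight."""
    df = abs(f1 - f2) / max(f1, f2)
    if df == 0 or df > critical_band:
        return 0.0
    return min(a1, a2) * math.sin(math.pi * df / critical_band)

def cdns(freqs, amps, residual_power):
    """Consonance-Dissonance-Noise signature of one brain state, given
    harmonic frequencies, their weights, and the power left unexplained
    by the harmonic decomposition (the noise)."""
    total_interaction = 0.0
    dissonance = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total_interaction += min(amps[i], amps[j])  # all interactions that could exist
            dissonance += pairwise_dissonance(freqs[i], freqs[j], amps[i], amps[j])
    consonance = total_interaction - dissonance  # what remains is consonant
    return consonance, dissonance, residual_power

# Three toy harmonics; the last two are close enough to clash.
c, d, noise = cdns([1.0, 2.0, 2.1], [1.0, 0.8, 0.5], residual_power=0.2)
```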


Each of these three components will have their associated spectral power distribution. The noise spectrum is obtained during the first denoising step (as whatever cannot be explained by the harmonic decomposition). Then the dissonance spectrum is a function of the minimum power of pairs of harmonics that exist within the critical band of each other (see slides 18; possibly upgraded by 20), as well as the frequencies of the beating patterns.

Quantifying Dissonance?

In order to quantify dissonance we use a method that may end up being simpler than what is needed to calculate dissonance for sound! For example, in Quantifying the Consonance of Complex Tones With Missing Fundamentals (Chon 2008) we learn that the human auditory system may at times detect dissonance even when there is no actual dissonance in the input. That is, there are auditory illusions pertaining to valence and dissonance: based on the missing fundamental, one can create ghost dissonance between tones that are not even present. That said, quantifying dissonance in a brain in terms of its harmonic decomposition may be easier than quantifying dissonance in auditory input, precisely because auditory input (and any sensory input, for that matter) goes through many intermediary pre-processing steps. The auditory system is relatively “direct” when compared to, e.g., the visual system, but some basic signal processing is still applied to the input before it influences brain harmonics. The sensory systems, adapted both to interface with a functioning valence system and to represent information adequately (in terms of the real-world distribution of inputs), serve to translate inputs into usable signals: frequency-based descriptions, often log-transformed, from which valence gradients can be derived. For this reason, the algorithm that describes how to extract valence from a brain state may turn out to be simpler than what is needed to predict the hedonic quality of patterns of sound (or sight, touch, etc.).

In brief, we propose that we can compute the approximate amount of dissonance between these harmonics by seeing how close they are in terms of spatial and temporal frequencies. If they are within the critical window, they are considered dissonant. There is likely to be a peak dissonance window, and when any pair of harmonic states lives within that window, experiencing both at once may feel really awful (to quantify such dissonance more precisely we would use a dissonance function as shown in Chon 2008). If indeed symmetry is intimately connected to valence, then highly anti-symmetrical states, such as those produced by overlapping brain harmonics within the critical band, may feel terrible. Remember, harmony is symmetry over time, so dissonance is anti-symmetry over time. It’s worth recalling, though, that in the absence of dissonance and noise, what remains by default is consonance.
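To make the “peak dissonance window” concrete, one could borrow Sethares’ parametric fit to the Plomp-Levelt dissonance curve from the music literature. The constants below are from that auditory fit (frequencies in Hz) and would need recalibration for brain harmonics; this illustrates the shape of the curve, not the exact function QRI would use.

```python
import math

def plomp_levelt_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares' fit to the Plomp-Levelt curve: dissonance between two
    partials is zero at unison, peaks at roughly a quarter of a critical
    bandwidth apart, and decays for wider separations."""
    b1, b2 = 3.5, 5.75          # fitted decay constants
    d_star = 0.24               # interval of maximum dissonance
    s1, s2 = 0.021, 19.0        # critical-bandwidth dependence on frequency
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = d_star / (s1 * f_lo + s2)
    x = s * (f_hi - f_lo)
    return min(a1, a2) * (math.exp(-b1 * x) - math.exp(-b2 * x))

unison = plomp_levelt_dissonance(440.0, 440.0)  # no dissonance at unison
near = plomp_levelt_dissonance(440.0, 460.0)    # inside the critical band
far = plomp_levelt_dissonance(440.0, 880.0)     # an octave apart
```

The qualitative behavior is what matters here: `near` is large, while `unison` and `far` are (near) zero, matching the “critical window” intuition above.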

Visualizing Emotions as CDNS’s of States of Consciousness

Above you can find two ways of visualizing a CDNS. Before we go on to the predictions, here we illustrate how we think that we will be able to see at a glance the valence of a brain with our method. The big circle shows the dissonance and consonance for each of the brain harmonics (the black dots surrounding the circle represent the weights for each state). If you want the overall dissonance in a given state, you add up the red-yellow arrows, whereas if you want the total consonance, you add the purple-light-blue arrows. The triangles on the right expand upon the valence diagram presented in Principia Qualia. Namely, we have a blue (positive valence/consonant), red (negative valence/dissonant), and grey (neutral valence/noise) component in a state of consciousness. Each of these components has a spectrum; the myriad textures of emotional states are the result of different spectral signatures for hedonically loaded patterns.

Testable Predictions

Quantifying Bliss (27)

We predict that intense emotions/experiences reported on psychedelics will result in states of consciousness whose harmonic decomposition will show a high amount of energy to be found in the pure harmonics (this was already found in 2017 as explained in the presentation, so let’s count that as a retrodiction). People who report being “very high” will have particularly high amounts of energy in their pure harmonics (as opposed to more noisy states).

The predicted valence for their experiences will be a function of the particular patterns (in terms of relative weights) of the various harmonics. Those who generate a highly harmonic CDNS will be blessed with high-valence experiences. And those who experience high dissonance, as empirically measured, will report negative feelings (e.g. fear, anxiety, nausea, weird and unpleasant body load, etc.). In particular, we can explore the shape of highly harmonic states. In this framework, MDMA would likely be seen as working by increasing the energy expressed by an exceptionally consonant set of harmonics in the brain.

A point to make here is that predicting “pure harmonics” on psychedelics (evidently simple and ordered patterns) might seem to run counter to the recently accrued empirical data concerning entropy in the tripping brain.** But we also know that the psychedelic brain can produce ridiculously self-similar, near-informationless yet highly intense moments of experience preceded by a symmetrification process. Indeed, there are several symmetric attractors for the interplay of awareness and attention at various levels of “consciousness energy” and quality of mood. These states, in turn, are not only hedonically charged, but also allow the exploration of high-energy qualia research (since the implicit symmetry provides an energy seal). Highly energetic states of consciousness can be encapsulated in a highly symmetrical network of local binding. More about this in a future article.

On the other hand, we predict that people on SSRIs will show an enhanced amount of noise in their CDNS. A couple of slides back, this was represented as a higher loading of activity in the grey component of the triangular visualization of a CDNS. Likewise, various drugs will have characteristic effects on the CDNS, such as stimulants inducing more consonance in the high frequencies, and opioids and hypnotics inducing high consonance in the low frequencies.

Summary of Predictions About Drug Effects

  1. Psychedelic substances will increase the overall power of the brain’s pure harmonics, and thus result in a CDN Signature characterized by: (a) high consonance of all frequencies, (b) high dissonance of all frequencies, and (c) low noise of all frequencies. Criticality will be observed by way of the CDNS having high variance.
  2. MDMA will produce a very specific range of states that have on the one hand very pure harmonic states of high frequencies, and on the other, very small collective dissonance and noise. In other words: (a) high amounts of high-frequency consonance, (b) low amounts of dissonance of all frequencies, and (c) low noise of all frequencies.
  3. Any “affect blunting” agent such as SSRIs, ibuprofen, aspirin, acetaminophen, and agmatine, will produce a CDNS characterized by: (a) reduced consonance of all frequencies, (b) reduced dissonance of all frequencies, and (c) increased noise in either some or all frequencies. We further hypothesize that different antidepressants (e.g. citalopram vs. fluoxetine) will look the same with respect to reducing the C and D components, but may have differences in the way they increase the N spectrum.
  4. Opioids in euphoric doses will be found to (a) increase low frequency consonance, (b) decrease dissonance for all frequencies but especially the high frequencies, and (c) slightly increase noise across the board.
  5. Stimulants will be found to (a) increase medium and high frequency consonance, (b) leave dissonance fairly unaltered, and (c) reduce noise for all frequencies but especially those in the upper end of the spectrum.

Predictions About Emotions

For now, here are the specific predictions concerning emotions that I am making:

  1. The energy of the consonant (C) component of a CDNS will be highly correlated with the amount of euphoria (pleasure, happiness, positive feelings, etc.) a person is experiencing.
  2. The energy of the dissonant (D) component will have a high correlation with the amount of dysphoria (pain, suffering, negative feelings, etc.) a person feels.
  3. The energy of the noise (N) component will be correlated with flattened affect and blunted valence (i.e. feeling neither good nor bad, like there is a fog that masks all feelings).
  4. If one creates a geometric representation of the relationships between various brain states using their respective CDNS similarities as a distance metric for emotional states using Multi-Dimensional Scaling (MDS) techniques, one will be able to recover a really good approximation of the empirically-derived dimensional models of emotions (cf. dimensional models of emotion in Wireheading Done Right). In other words, if you ask your participants to tell you how they feel during the fMRI sessions and then associate those emotions to their instantaneous CDNS, and then you apply multidimensional scaling to the resulting CDNS, you will be able to recover a good dimensional picture of the state-space of emotions. I.e. “subjective similarity between emotions” will be closely tracked by the geometric distance between their corresponding CDNS:
    1. Applying MDS scaling to the C component of the CDNS will result in a better characterization of the differences between positive emotions.
    2. Applying MDS to the D component will result in a better characterization of the differences between negative emotions. And,
    3. Applying MDS to the N component will result in a better characterization of the differences between valence-neutral emotions.
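Prediction 4 can be prototyped directly. Below is a from-scratch classical MDS (double-centering plus eigendecomposition) applied to made-up CDNS energy vectors for four hypothetical emotional states; all the numbers are invented for illustration.

```python
import numpy as np

def classical_mds(distances, dims=2):
    """Classical multidimensional scaling: embed points so that their
    Euclidean distances approximate a given distance matrix."""
    d2 = np.asarray(distances, dtype=float) ** 2
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ d2 @ J                      # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :dims] * np.sqrt(np.maximum(eigvals[:dims], 0.0))

# Hypothetical (C, D, N) energies for four emotional states.
cdns_vectors = np.array([
    [0.9, 0.1, 0.1],   # euphoria
    [0.7, 0.2, 0.1],   # serenity
    [0.1, 0.8, 0.2],   # anxiety
    [0.2, 0.2, 0.9],   # blunted affect
])
D = np.linalg.norm(cdns_vectors[:, None] - cdns_vectors[None, :], axis=-1)
emotion_map = classical_mds(D, dims=2)  # 2-D "state-space of emotions"
```

If the prediction is right, running this on real CDNS data should recover something resembling the empirically derived dimensional (circumplex-style) models of emotion.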

The Future of Mental Health

Quantifying Bliss (28)

Sir, your 17th harmonic is really messing up the consonance of your 19th harmonic, and it interrupts the creative morning mood you recently enjoyed. I suggest taking 1mg of Coluracetam, listening to a selection of Diamond songs, and RD23 [stretching exercise]. Here’s your expected CDNS.

Penfield mood organs may not be as terrible as they seem. At least not if you’re given a good combination of personalized settings, and a manual to wire-head in the proper manner. In such a situation, among the options available, you will have the ability to choose an experientially attractive, healthy, and sustainable set of moods indefinitely.

The “clinical phenomenologist” of the year 2050 might look into your brain harmonics, and try to find the shortest paths to nearby state-spaces with less chronic dissonance, fishing for high-consonance attractors with large basins to shoot for. The qualia expert would go on to provide you various options that may improve all sorts of metrics, including valence, the most important of them all. If you ask, your phenomenologist can give you trials for fully reversible treatments. You sample them in your own time, of course, and test them for a day or two before deciding whether to use these moods for longer.

Personalized Harmonic Retuning

I assume that people will be given just about enough retuning to get back to their daily routines as they themselves prefer them, but without any sort of nagging dissonance. Most people will probably continue on with their preference architectures relatively unchanged. Indeed, that will be a valued quality for a personalized harmonic retuning product. Having adequate mood devices that don’t mess up your existing value system might eventually become a highly understood, precision-engineered aspect of mainstream mental health. At least compared to the current (pre-psychedelic re-adoption 2017) paradigms. Arguably, even psychedelic therapy is pretty blunt in a way. Not in the sense of blunting the hedonic quality of your experience (on the contrary). But in the sense of applying the harmonization process indiscriminately.

For the psychonauts (hopefully they are not too rare by then), who still want to investigate consciousness even though human life is already full of love (in the future), we will have a different arrangement. They are free to explore themselves while being part of a research institute. Indeed, pursuing the purpose of understanding the big picture (including consciousness) will require the experimental method. Moreover, exploring the state-space of consciousness will, for the foreseeable future, be a way to find new ways of making others happy. People will continue to explore alien state-spaces in search of highly-prized high-valence states. At least for some scores of generations, valence engineering is bound to remain economically profitable. As we discover new drugs, new treatments, new philosophical trances, new interpretations and expressions of love, and so on, the economy will adapt to these inventions. We already live in an informational economy of states of consciousness, and the future is likely to be like that as well, except that consciousness technologies will be immensely more powerful.

Barring the unlikely emergence of anti-hedonist Spartan self-punishing transhumanist social movements enabled with genetic technology, I don’t anticipate major obstacles in the eventual widespread use of mood organs. In fact, the wide adoption of SSRIs in some pockets of society shows that the general public is willing and interested in minor self-adjustments to deal with chronic negativity. Hedonic technology is in its early days, but with a root understanding of the nature of valence, the sky is the limit.

Case studies – SSRIs & Psychedelics

Let’s take a closer look at SSRIs and psychedelics in light of the Symmetry Theory of Valence.

SSRIs have an overall effect of blunting one’s experience at pretty much every level imaginable. Usually just a little, enough to help people re-establish a new order between their harmonics, in a more noisy, less intense range of moods. Some people may benefit from this sort of intervention. It’s also worth pointing out the possible side effects, which have the common theme of reducing the structural integrity of the micro-structure of consciousness. Thus, the highly ordered pleasant and unpleasant experiences get softened. Whether this generalized softening is beneficial depends on many factors. Psychonauts usually avoid SSRIs as much as possible in order to protect the psychoacoustic potential of their brain, should they desire to use this potential sometime in the future.

Psychedelics, in this framework, would be interpreted as neuroacoustic enhancers. These agents trigger, via control interruption, a more “echo-ey acoustic environment for one’s consciousness”. Meaning, any qualia experienced under the influence lasts for longer (the decay of intensity of experience as a function of time since presentation of stimuli becomes a lot “slower” or “fatter”). On high doses, the intensity of each component of a cycle of an experience can feel just as intense, and thus one might find oneself unable to locate oneself in time. Sometimes intense feelings return cyclically, and ultimately at strong doses, experiential feedback dominates every aspect of one’s experience, and there isn’t anything other than standing waves of synesthetic psychedelic feelings.

Peak symmetry states with their associated valence would be predicted to be far more accessible on highly harmonic states of consciousness. So psychedelics and the like could be carefully used to explore the positive extreme of valence: Hyper-symmetrical states. That said, for responsible exploration, a euphoriant will be needed to prevent negative psychedelic experiences.

Final Thoughts

A Harmonic Society is a place where everyone recognizes what makes other sentient beings love life. It’s a place in which everyone deeply understands the valence landscapes of other beings. People in such a society would know that a zebra, an owl, and a salamander all share the pursuit of harmonic states of consciousness, albeit in their own, often different-looking, state-spaces of qualia. We would understand each other far more deeply if we saw each other’s valence landscapes as part of a big state-space of possible preference architectures. Ultimately, the pursuit of existential bliss and the ontological question (why being?) would incite us to explore each other through consciousness technologies. We will have an expanded state-space of available possible moods, both individual and collective, increasing our chances of finding a new revolutionary understanding of consciousness, identity, and what’s possible for post-hedonium societies.

*I will note that to define what’s ethical one ultimately relies on beliefs about personal identity; truly frame-independent systems of morality are exceptionally hard to construct.

**The Entropic Brain theory portrays psychedelia in terms of increased entropy, but also, and most importantly, focuses on criticality. Just thinking about entropy would not distinguish between adding white noise and adding interesting patterns. In other words, from the point of view of simple entropy without any spectral (or nonlinear) analysis, SSRIs and psychedelics are doing pretty much the same thing. So the sense of “entropy” that matters will have to be a lot more detailed, showing you in what way the information encoded in normal states of consciousness changes as a function of entropy added in various ways.

On psychedelics one does indeed find highly ordered crystal-like states of consciousness (which I’ve described elsewhere as peak symmetry states), and as far as we know those states are also some of the most positively hedonically charged. Hence, at least in terms of describing the quality of the psychedelic experience, leaving symmetry out would make us miss an important big-picture kind of quality for psychedelics in general and their connection to valence variance.


***

My hypothesis strongly implies that ‘hedonic’ brain regions influence mood by virtue of acting as ‘tuning knobs’ for symmetry/harmony in the brain’s consciousness centers. Likewise, nociceptors, and the brain regions which gate & interpret their signals, will be located at critical points in brain networks, able to cause large amounts of salience-inducing antisymmetry very efficiently. We should also expect rhythm to be a powerful tool for modeling brain dynamics involving valence- for instance, we should be able to extend (Safron 2016)’s model of rhythmic entrainment in orgasm to other sorts of pleasure.

– Michael Johnson in Principia Qualia, page 52

Connectome-Specific Harmonic Waves on LSD

The harmonics-in-connectome approach to modeling brain activity is a fascinating paradigm. I was privileged to be at this talk at the 2017 Psychedelic Science conference, and I’m extremely happy to find out that MAPS has already uploaded the talks. Dive in!

Below is a partial transcript of the talk. I figured that I should get it in written form in order to be able to reference it in future articles. Enjoy!

[After a brief introduction about harmonic waves in many different kinds of systems… at 7:04, Selen Atasoy]:

We applied the [principle of harmonic decomposition] to the anatomy of the brain. We made them connectome-specific. So first of all, what do I mean by the human connectome? Today thanks to the recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex. And the set of all of the different connections is called the connectome.


Now, because we know the equation governing these harmonic waves, we can extend this principle to the human brain by simply solving the same equation on the human connectome instead of a metal plate (Chladni plates) or the anatomy of the zebra. And if we do that, we get a set of harmonic patterns, this time emerging in the cortex. And we decided to call these harmonic patterns connectome harmonics. And each of these connectome harmonic patterns is associated with a different frequency. And because they correspond to different frequencies they are all independent, and together they give you a new language, so to speak, to describe neural activity. So in the same way the harmonic patterns are building blocks of these complex patterns we see on animal coats, these connectome harmonics are the building blocks of the complex spatio-temporal patterns of neural activity.

Describing and explaining neural activity by using these connectome harmonics as brain states is really not very different from decomposing a complex musical piece into its musical notes. It’s simply a new way of representing your data, or a new language to express it.

What is the advantage of using this new language? Why not use the state-of-the-art conventional neuroimaging analysis methods? Because these connectome harmonics are, by definition, vibration modes applied to the anatomy of the human brain, and if we use them as brain states to express neural activity we can compute certain fundamental principles very easily, such as the energy or the power.

The power would be the strength of activation of each of these states in neural activity. So how strongly that particular state contributes to neural activity. And the energy would be a combination of this strength of activation with the intrinsic energy of that particular brain state, and the intrinsic energy comes from the frequency of its vibration (in the analogy of vibration).
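In code, those two definitions might look as follows. The power is just the activation strength |a_k|; for the energy I am guessing at the talk’s “intrinsic energy” by using the textbook oscillator relation E ∝ ω²a², so treat the second formula as an assumption rather than the published definition.

```python
import numpy as np

def harmonic_power_and_energy(weights, eigvals):
    """Per-mode power and energy of a connectome-harmonic decomposition.
    weights: activation strengths a_k; eigvals: Laplacian eigenvalues,
    whose square roots play the role of intrinsic frequencies omega_k."""
    weights = np.asarray(weights, dtype=float)
    omegas = np.sqrt(np.maximum(np.asarray(eigvals, dtype=float), 0.0))
    power = np.abs(weights)                 # strength of activation
    energy = (omegas ** 2) * weights ** 2   # strength combined with intrinsic energy
    return power, energy

# Toy case: the constant mode (eigenvalue 0) carries power but no energy.
power, energy = harmonic_power_and_energy([1.0, 0.5], [0.0, 4.0])
```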

So in this study we looked at the power and the energy of these connectome harmonic brain states in order to explore the neural correlates of the LSD experience.

We looked at 12 healthy participants who received either 75µg of LSD (IV) or a placebo, over two sessions. These two sessions were 14 days apart, in counterbalanced order. The fMRI scans consisted of three eyes-closed resting-state scans, each lasting 7 minutes; in the first and the third scan the participants were simply resting, eyes closed, but in the second scan they were also listening to music. And after each scan, the participants rated the intensity of certain experiences.


So if we look, firstly, at the total power and the total energy of each of these scans under LSD and placebo, what we see is that under LSD both the power as well as the energy of brain activity increase significantly.

And if we compute the probability of observing a certain energy value on LSD or placebo, what we see is that the peak of this probability distribution clearly shoots towards high energy values under LSD.


And that peak is even slightly higher in terms of probability when the subjects were listening to music. So if we interpret that peak as, in a way, the characteristic energy of a state, you can see that it shifts towards higher energy under LSD, and that this effect is intensified when listening to music.

And then we asked, which of these brain states, which of these frequencies, were actually contributing to this energy increase. So we partitioned the spectrum of all of these harmonic brain states into different parts and computed the energy of each of these partitions individually. So in total we have around 20,000 brain states. And if you look at the energy differences in LSD and placebo, what we find is that for a very narrow range of low frequencies actually these brain states were decreasing their energy on LSD. But for a very broad range of high frequencies, LSD was inducing an energy increase. So this says that LSD alters brain dynamics in a very frequency-selective manner. And it was causing high frequencies to increase their energy.
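The partitioning she describes is straightforward to mimic: split the ordered spectrum of harmonic states into contiguous bins and compare per-bin energy between conditions. The energy values below are invented for illustration, arranged so that the difference reproduces the reported pattern (a narrow low-frequency decrease, a broad high-frequency increase under LSD).

```python
import numpy as np

def energy_by_partition(energies, n_bins):
    """Sum the energy of harmonic brain states within contiguous
    partitions of the (low-to-high frequency) spectrum."""
    parts = np.array_split(np.asarray(energies, dtype=float), n_bins)
    return np.array([p.sum() for p in parts])

# Hypothetical per-state energies, ordered from low to high frequency.
lsd = np.array([0.2, 0.3, 1.5, 2.0, 2.5, 3.0])
placebo = np.array([0.6, 0.7, 1.0, 1.0, 0.9, 0.8])
diff = energy_by_partition(lsd, 3) - energy_by_partition(placebo, 3)
# diff[0] < 0 (low frequencies lose energy), diff[1:] > 0 (high frequencies gain)
```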

So next we looked at whether these changes we are observing in brain activity are correlated with any of the experiences that the participants themselves were having in that moment. If you look at the energy changes within the narrow range of low frequencies, we found that the energy changes in that range significantly correlated with the intensity of the experience of ego dissolution. The loss of subjective self.


And very interestingly, the same range of energy change within the same frequency range also significantly correlated with the intensity of emotional arousal, whether the experience was positive or negative. This could be quite relevant for studies looking into potential therapeutic applications of LSD.


Next, when we look at a slightly higher range of frequencies, what we found was that the energy changes within that range significantly correlated with the positive mood.


In brief, this suggests that it’s rather the low frequency brain states which correlated with ego dissolution or with emotional arousal, and it’s the activity of higher frequencies that is correlated with the positive experiences.

Next, we wanted to check the size of the repertoire of active brain states. And if you look at the probability of activation for any brain state (without distinguishing between frequencies), what we observe is that the probability of a brain state being silent (zero contribution) actually decreased under LSD. And the probability of a brain state contributing very strongly, which corresponds to the tails of these distributions, was increased under LSD. So this suggests that LSD was activating more brain states simultaneously.


And if we go back to the music analogy that we used in the beginning, that would correspond to playing more musical notes at the same time. And it’s very interesting, because studies of jazz improvisation show that improvising musicians play significantly more musical notes compared to memorized play. And this is what we seem to be finding under the effect of LSD: your brain is actually activating more of these brain states simultaneously.


And it does so in a very non-random fashion. So if you look at the correlation across different frequencies (at the co-activation patterns and their activation over time), you may interpret it as the “communication across various frequencies”. What we found is that for a very broad range of the spectrum, there was a higher correlation across different frequencies in their activation patterns under LSD compared to placebo.
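The cross-frequency co-activation analysis can be sketched with synthetic time courses. Here a shared latent signal stands in for coordinated ("non-random") activation; everything is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_timepoints = 12, 500

# Hypothetical activation strength over time for each frequency band.
# A shared latent signal mimics coordinated co-activation; pure noise
# mimics independent activation.
latent = rng.normal(size=n_timepoints)
coordinated = 0.6 * latent + rng.normal(size=(n_bands, n_timepoints))
independent = rng.normal(size=(n_bands, n_timepoints))

def mean_cross_band_correlation(activations):
    """Average correlation between all pairs of distinct bands."""
    corr = np.corrcoef(activations)
    off_diagonal = corr[~np.eye(len(corr), dtype=bool)]
    return off_diagonal.mean()

print(mean_cross_band_correlation(coordinated) >
      mean_cross_band_correlation(independent))  # True
```

A higher mean cross-band correlation in one condition than another is the kind of signature the talk describes for LSD versus placebo.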

So this really says that LSD is actually causing a reorganization, rather than a random activation, of brain states. It’s expanding the repertoire of active brain states, while maintaining, or maybe better said, recreating a complex but spontaneous order. And in the musical analogy it’s really very similar to jazz improvisation, to think about it in an intuitive way.

Now, there is actually one particular situation when dynamical systems such as the brain, systems that change their activity over time, show this type of emergence of complex order, or enhanced improvisation, enhanced repertoire of active states. And this is when they approach what is called criticality. Now, criticality is this special type of behavior, special type of dynamics, that emerges right at the transition between order and chaos, when these two (extreme) types of dynamics are in balance. And criticality is said to be “the constantly shifting battle zone between stagnation and anarchy. The one place where a complex system can be spontaneous, adaptive, and alive” (Waldrop 1992). So if a system is approaching criticality, there are very characteristic signatures that you would observe in the data, in the relationships that you plot in your data.

And one of them, probably the most characteristic, is the emergence of power laws. So what does that mean? If you plot one observable in our data, which in our case would be the maximum power of a brain state, in relationship to another observable, for example the wavenumber, or the frequency of that brain state, and you plot them in logarithmic coordinates, then if they follow a power law they will approximate a line. And this is exactly what we observe in our data, and surprisingly for both LSD as well as for placebo, but with one very significant and remarkable difference: because the high frequencies increase their power on LSD, this distribution follows this power law, this line, way more accurately under LSD compared to placebo. And here you see the error of the fit, which is decreasing.
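The log-log line fit and its error can be sketched as follows, with synthetic spectra standing in for the real data (the exponent and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
wavenumbers = np.arange(1, 201, dtype=float)

# Hypothetical maximum-power spectra: a power law P ~ k^(-2) with
# multiplicative noise. The "LSD-like" spectrum deviates less from
# the pure power law, the "placebo-like" one more.
lsd_like = wavenumbers**-2.0 * np.exp(rng.normal(0, 0.1, wavenumbers.size))
placebo_like = wavenumbers**-2.0 * np.exp(rng.normal(0, 0.5, wavenumbers.size))

def power_law_fit_error(k, power):
    """Fit a line in log-log coordinates; return the RMS residual."""
    slope, intercept = np.polyfit(np.log(k), np.log(power), 1)
    residuals = np.log(power) - (slope * np.log(k) + intercept)
    return np.sqrt(np.mean(residuals**2))

print(power_law_fit_error(wavenumbers, lsd_like) <
      power_law_fit_error(wavenumbers, placebo_like))  # True
```

A smaller fit error means the spectrum hews more closely to a power law, which is the comparison the talk reports between conditions.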

This suggests that LSD shoots brain dynamics further towards criticality.  The signature of criticality that we find in LSD and in placebo is way more enhanced, way more pronounced, under the effect of LSD. And we found the same effect, not only for the maximum power, but also for the mean power, as well as for the power of fluctuations.


So this suggests that the criticality actually may be the principle that is underlying this emergence of complex order, and this reorganization of brain dynamics, and which leads to enhanced improvisation in brain activity.

So, to summarize briefly, what we found was that LSD increases the total power as well as total energy of brain activity. It selectively activates high frequency brain states, and it expands the repertoire of active brain states in a very non-random fashion. And the principle underlying all of these changes seems to be a reorganization of brain dynamics right at criticality, right at the edge of chaos, at the balance between order and chaos. And very interestingly, the “edge of chaos”, or the edge of criticality, is said to be where “life has enough stability to sustain itself, and enough creativity to deserve the name of life” (Waldrop 1992). So I leave you with that, and thank you for your attention.

[Applause; ends at 22:00, followed by Q&A]

ELI5 “The Hyperbolic Geometry of DMT Experiences”


I wrote the following in response to a comment on the r/RationalPsychonaut subreddit about this DMT article I wrote some time ago. The comment in question was: “Can somebody eli5 [explain like I am 5 years old] this for me?” So here is my attempt (more like “eli12”, but anyways):

In order to explain the core idea of the article I need to convey the main takeaways of the following four things:

  1. Differential geometry,
  2. How it relates to symmetry,
  3. How it applies to experience, and
  4. How the effects of DMT turn out to be explained (in part) by changes in the curvature of one’s experience of space (what we call “phenomenal space”).

1) Differential Geometry

If you are an ant on a ball, it may seem like you live on a “flat surface”. However, let’s imagine you do the following: You advance one centimeter in one direction, you turn 90 degrees and walk another centimeter, turn 90 degrees again and advance yet another centimeter. Logically, you just “traced three edges of a square” so you cannot be in the same place from which you departed. But let’s say that you somehow do happen to arrive at the same place. What happened? Well, it turns out the world in which you are walking is not quite flat! It’s very flat from your point of view, but overall it is a sphere! So you ARE able to walk along a triangle that happens to have three 90 degree corners.

That’s what we call a “positively curved space”. There the angles of triangles add up to more than 180 degrees. In flat spaces they add up to 180. And in “negatively curved spaces” (i.e. “negative Gaussian curvature” as talked about in the article) they add up to less than 180 degrees.
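This relationship between curvature and angle sums has a precise form. For a geodesic triangle $T$ with angles $\alpha, \beta, \gamma$ on a surface of Gaussian curvature $K$, the Gauss-Bonnet theorem gives:

```latex
\alpha + \beta + \gamma = \pi + \int_{T} K \, dA
```

Positive curvature (the sphere) pushes the angle sum above 180 degrees ($\pi$ radians), zero curvature leaves it at exactly 180, and negative curvature pulls it below. For the ant’s triangle with three 90-degree corners on a unit sphere, the excess of $\pi/2$ equals the triangle’s area: exactly one eighth of the sphere’s total area of $4\pi$.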


Eight 90-degree triangles on the surface of a sphere

So let’s go back to the ant again. Now imagine that you are walking on some surface that, again, looks flat from your restricted point of view. You walk one centimeter, then turn 90 degrees, then walk another, turn 90 degrees, etc. for a total of, say, 5 times. And somehow you arrive at the same point! So now you traced a pentagon with 90 degree corners. How is that possible? The answer is that you are now in a “negatively curved space”, a kind of surface that in mathematics is called “hyperbolic”. Of course it sounds impossible that this could happen in real life. But the truth is that there are many hyperbolic surfaces that you can encounter in your daily life. Just to give an example, kale is a highly hyperbolic 2D surface (“H2” for short). It’s crumbly and very curved. So an ant might actually be able to walk along a regular pentagon with 90-degree corners if it’s walking on kale (cf. Too Many Triangles).


An ant walking on kale may infer that the world is an H2 space.

In brief, hyperbolic geometry is the study of spaces that have this quality of negative curvature. Now, how is this related to symmetry?

2) How it relates to symmetry

As mentioned, on the surface of a sphere you can find triangles with 90 degree corners. In fact, you can partition the surface of a sphere into 8 regular triangles, each with 90 degree corners. Now, there are also other ways of partitioning the surface of a sphere with regular shapes (“regular” in the sense that every edge has the same length, and every corner has the same angle). But the number of ways to do it is not infinite. After all, there’s only a handful of regular polyhedra (which, when “inflated”, are equivalent to the ways of partitioning the surface of a sphere in regular ways).


If you instead want to partition a plane in a regular way with geometric shapes, you don’t have many options. You can partition it using triangles, squares, and hexagons. And in all of those cases, the angles on each of the vertices will add up to 360 degrees (e.g. six triangles, four squares, or three corners of hexagons meeting at a point). I won’t get into Wallpaper groups, but suffice it to say that there are also a limited number of ways of breaking down a flat surface using symmetry elements (such as reflections, rotations, etc.).


Regular tilings of 2D flat space

Hyperbolic 2D surfaces can be partitioned regularly in an infinite number of ways! This is because we no longer have the constraints imposed by flat (or spherical) geometries where the angles of shapes must add up to a certain number of degrees. As mentioned, in hyperbolic surfaces the corners of triangles add up to less than 180 degrees, so you can fit more than 6 corners of equilateral triangles at one point (and depending on the curvature of the space, you can fit up to an infinite number of them). Likewise, you can tessellate the entire hyperbolic plane with heptagons.
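This trichotomy can be captured in a few lines. For a regular tiling {p, q} (q regular p-gons meeting at each corner), comparing the vertex angle sum with 360 degrees reduces to comparing (p − 2)(q − 2) with 4. The sketch below (my own illustration, not from the article) classifies any such tiling:

```python
# Classify regular tilings {p, q}: q regular p-gons meeting at each
# vertex. A flat p-gon's interior angle is (p - 2) * 180 / p degrees,
# so the vertex angle sum q * (p - 2) * 180 / p is < 360 on a sphere,
# = 360 in the plane, and > 360 in the hyperbolic plane. This reduces
# to comparing (p - 2) * (q - 2) with 4.
def classify_tiling(p, q):
    excess = (p - 2) * (q - 2)
    if excess < 4:
        return "spherical"
    if excess == 4:
        return "flat"
    return "hyperbolic"

# The five Platonic solids are exactly the spherical cases:
spherical = [(p, q) for p in range(3, 8) for q in range(3, 8)
             if classify_tiling(p, q) == "spherical"]
print(spherical)  # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]

print(classify_tiling(4, 4))  # flat: the square grid
print(classify_tiling(7, 3))  # hyperbolic: heptagons, three per corner
```

Running it recovers the handful of spherical partitions, the three flat tilings (as {3, 6}, {4, 4}, {6, 3}), and an infinite family of hyperbolic ones.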


Hyperbolic tiling: Each of the heptagons is just as big (i.e. this is a projection of the real thing)

On the flip side, if you see a regular partitioning of a surface, you can infer what its curvature is! For example, if you see that a surface is entirely covered with heptagons, three on each of the corners, you can be sure that you are seeing a hyperbolic surface. And if you see a surface covered with triangles such that there are only four triangles on each joint, then you know you are seeing a spherical surface. So if you train yourself to notice and count these properties in regular patterns, you will indirectly also be able to determine whether the patterns inhabit a spherical, flat, or hyperbolic space!

3) How it applies to experience

How does this apply to experience? Well, in sober states of consciousness one is usually restricted to seeing and imagining spherical and flat surfaces (and their corresponding symmetric partitions). One can of course look at a piece of kale and think “wow, that’s a hyperbolic surface” but what is impossible to do is to see it “as if it were flat”. One can only see hyperbolic surfaces as projections (i.e. where we make regular shapes look irregular so that they can fit on a flat surface) or we end up contorting the surface in a crumbly fashion in order to fit it in our flat experiential space. (Note: even sober phenomenal space happens to be based on projective geometry; but let’s not go there for now.)

4) DMT: Hyperbolizing Phenomenal Space

In psychedelic states it is common to experience whatever one looks at (or, with more stunning effects, whatever one hallucinates in a sensorially-deprived environment such as a flotation tank) as slowly becoming more and more symmetric. Symmetrical patterns are attractors in psychedelia. It’s common for people to describe their acid experiences as “a kaleidoscope of colors and meaning”. We should not be too quick to dismiss these descriptions as purely metaphorical. As you can see from the article Algorithmic Reduction of Psychedelic States as well as PsychonautWiki’s Symmetrical Texture Repetition, LSD and other psychedelics do in fact “symmetrify” the textures you experience!


What gravel might look like on 150 mics of LSD (Source)

As it turns out, this symmetrification process (what we call “lowering the symmetry detection/propagation threshold”) does allow one to experience any of the possible ways of breaking down spherical and flat surfaces in regular ways (in addition to also enabling the experience of any wallpaper group!). Thus the surfaces of the objects one hallucinates on LSD (especially for Closed-Eye Visuals) are usually carpeted with patterns that have either spherical or flat symmetries (e.g. seeing honeycombs, square grids, regular triangulations, etc.; or seeing dodecahedra, cubes, etc.).


17 wallpaper symmetry groups

Only on very high doses of classic psychedelics does one start to experience objects that have hyperbolic curvature. And this is where DMT becomes very relevant. Vaping it is one of the most efficient ways of achieving “unworldly levels of psychedelia”:

On DMT the “symmetry detection threshold” is reduced to such an extent that any surface you look at very quickly gets super-saturated with regular patterns. Since (for reasons we don’t understand) our brain tries to incorporate whatever shape you hallucinate into the scene as part of the scene, the result of seeing too many triangles (or heptagons, or whatever) is that your brain will “push them into the surfaces” and, in effect, turn those surfaces into hyperbolic spaces.

Yet another part of your brain (or system of consciousness, whatever it turns out to be) recognizes that “wait, this is waaaay too curved somehow, let me try to shape it into something that could actually exist in my universe”. Hence, in practice, if you take between 10 and 20 mg of DMT, the hyperbolic surfaces you see will become bent and contorted (similar to the pictures you find in the article) just so that they can be “embedded” (a term that roughly means “to fit some object into a space without distorting its properties too much”) into your experience of the space around you.

But then there’s a critical point at which this is no longer possible: Even the most contorted embeddings of the hyperbolic surfaces you experience cannot fit any longer in your normal experience of space on doses above 20 mg, so your mind has no other choice but to change the curvature of the 3D space around you! Thus when you go from “very high on DMT” to “super high on DMT” it feels like you are traveling to an entirely new dimension, where the objects you experience do not fit any longer into the normal world of human experience. They exist in H3 (hyperbolic 3D space). And this is in part why it is so extremely difficult to convey the subjective quality of these experiences. One needs to invoke mathematical notions that are unfamiliar to most people; and even then, when they do understand the math, the raw feeling of changing the damn geometry of your experience is still a lot weirder than you could ever anticipate.


Anybody else want to play hyperbolic soccer? Humans vs. Entities, the match of the eon!

Note: The original article goes into more depth

Now that you understand the gist of the original article, I encourage you to take a closer look at it, as it includes content that I didn’t touch in this ELI5 (or 12) summary. It provides a granular description of the 6 levels of DMT experience (Threshold, Chrysanthemum, Magic Eye, Waiting Room, Breakthrough, and Amnesia), many pictures to illustrate the various levels as well as the particular emergent geometries, and a theoretical discussion of the various algorithmic reductions that might explain how the hyperbolization of phenomenal space takes place based on combining a series of simpler effects together.

Qualia Computing at Consciousness Hacking (June 7th 2017)

I am delighted to announce that I will be presenting at Consciousness Hacking in San Francisco on 2017/6/7 (YMD notation).

Consciousness Hacking (CoHack) is an extremely awesome community that blends a genuine interest in benevolence, scientific rationality, experiential spirituality, self-experimentation, and holistic wellbeing together with an unceasing focus on consciousness. Truth be told, CoHack is one of the reasons why I love living in the Bay Area.

Here are the relevant event links: Eventbrite, Facebook, Meetup.

And the event description:

What would happen if a bliss technology capable of inducing a constant MDMA-like state of consciousness with no negative side effects were available? What makes an experience good or bad? Is happiness a spiritual trick, or is spirituality a happiness trick?

At this month’s speaker presentation, Consciousness Hacking invites Data Science Engineer, Andrés Gómez Emilsson to discuss current research, including his own, concerning the measurement of bliss, how blissful brain states can be induced, and what implications this may have on quality of life and our relationship with the world around us.

Emilsson’s research aims to create a mathematical theory of the pleasure-pain axis that can take information about a person’s brain at a given point in time and return the approximate (or even true) level of happiness and suffering for that person. Emilsson will explore two dimensions that have been studied in affective neuroscience for decades:

  • Arousal: how much energy and activation a given emotion has
  • Valence: the “feel good or feel bad” dimension of emotion

If the purpose of life is to feel happy and to make others happy, then figuring out how valence is implemented in the brain may take us a long way in that direction. Current approaches to valence, while helpful, usually don’t address the core of the problem (i.e. usually just measuring the symptoms of pleasure such as the neurotransmitters that trigger it, brain regions, positive reinforcement, etc. rather than getting at the experience of pleasure itself).

A real science of valence would not only be able to integrate and explain why the things people report as pleasurable are pleasant, it would also make a precise, empirically falsifiable hypothesis about whether arbitrary brain states will feel good or bad. This is what Emilsson aims to do.

You will take away:

  • An understanding about the current scientific consensus on the nature of happiness in the brain, and why it is incomplete
  • A philosophical case for both the feasibility and desirability of a world devoid of intense suffering
  • A new candidate mathematical formula that can be used to predict the psychological wellbeing of a brain at a given point in time
  • An argument for why bliss technology that puts us in a constant MDMA-like state of consciousness with no negative side effects is likely to become available within the next two to five decades
  • The opportunity to network with other people who are serious about figuring out the meaning of life through introspection and neuroscience

About our speaker:

Andrés Gómez Emilsson was born in Mexico City in 1990. From an early age, he developed an interest in philosophy, mathematics, and science, leading him to compete nationally and internationally in Math and Science Olympiads. At 16, his main interest was mathematics, but after an unexpected “mystical experience”, he turned his attention to consciousness and the philosophical problems that it poses. He studied Symbolic Systems with an Artificial Intelligence concentration at Stanford, and later finished a master’s in Computational Psychology at the same university. During his time at Stanford he co-founded the Stanford Transhumanist Association and became good friends with transhumanist philosopher David Pearce, taking on the flag of the Hedonistic Imperative (HI). In order to pursue the long-term goals of HI, his current primary intellectual interest is to reverse-engineer the functional, biochemical and/or quantum signatures of pure bliss.

He is currently working at a Natural Language Processing company in San Francisco, creating quantitative measures of employee happiness, productivity, and ethics at companies, with the long-term intent of creating a consciousness research institute that’s also a great place to work for (i.e. one in which employees are happy, productive, and ethical). In his free time he develops psychophysical tools to study the computational properties of consciousness.


6:30: Check in, snacks

6:45: Structured schmoozing

6:55: Event intro and meditation

7:00: Andrés Gómez Emilsson

7:50: Break

8:00: Break-out Sessions (small group discussion)

9:00: Break-out Recap

9:15: Closing meditation

About our venue:

ECO-SYSTM is a dynamic community of creative professionals, startups, and freelancers, founded on the idea that entertainment, creativity and business can come together to offer a truly unique work experience for Bay Area professionals. Check out membership plans here:


Principia Qualia: Part II – Valence

Extract from Principia Qualia (2016) by my colleague Michael E. Johnson (from Qualia Research Institute). This is intended to summarize the core ideas of chapter 2, which proposes a precise, testable, simple, and so far science-compatible theory of the fundamental nature of valence (also called hedonic tone or the pleasure-pain axis; what makes experiences feel good or bad).


VII. Three principles for a mathematical derivation of valence

We’ve covered a lot of ground with the above literature reviews and the synthesis of a new framework for understanding consciousness research. But we haven’t yet fulfilled the promise about valence made in Section II: to offer a rigorous, crisp, and relatively simple hypothesis about valence. This is the goal of Part II.

Drawing from the framework in Section VI, I offer three principles to frame this problem: ​

1. Qualia Formalism: for any given conscious experience, there exists, in principle, a mathematical object isomorphic to its phenomenology. This is a formal way of saying that consciousness is in principle quantifiable, much as electromagnetism, or the square root of nine, is quantifiable. I.e. IIT’s goal, to generate such a mathematical object, is a valid one.

2. Qualia Structuralism: this mathematical object has a rich set of formal structures. Based on the regularities & invariances in phenomenology, it seems safe to say that qualia has a non-trivial amount of structure. It likely exhibits connectedness (i.e., it’s a unified whole, not the union of multiple disjoint sets), and compactness, and so we can speak of qualia as having a topology.

More speculatively, based on the following:

(a) IIT’s output format is data in a vector space,

(b) Modern physics models reality as a wave function within Hilbert Space, which has substantial structure,

(c) Components of phenomenology such as color behave as vectors (Feynman 1965), and

(d) Spatial awareness is explicitly geometric,

…I propose that Qualia space also likely satisfies the requirements of being a metric space, and we can speak of qualia as having a geometry.

Mathematical structures are important, since the more formal structures a mathematical object has, the more elegantly we can speak about patterns within it, and the closer our words can get to “carving reality at the joints”. ​

3. Valence Realism: valence is a crisp phenomenon of conscious states upon which we can apply a measure.

–> I.e. some experiences do feel holistically better than others, and (in principle) we can associate a value to this. Furthermore, to combine (2) and (3), this pleasantness could be encoded into the mathematical object isomorphic to the experience in an efficient way (we should look for a concise equation, not an infinitely-large lookup table for valence). […]


I believe my three principles are all necessary for a satisfying solution to valence (and the first two are necessary for any satisfying solution to consciousness):

Considering the inverses:

If Qualia Formalism is false, then consciousness is not quantifiable, and there exists no formal knowledge about consciousness to discover. But if the history of science is any guide, we don’t live in a universe where phenomena are intrinsically unquantifiable; rather, we just haven’t been able to crisply quantify consciousness yet.

If Qualia Structuralism is false and Qualia space has no meaningful structure to discover and generalize from, then most sorts of knowledge about qualia (such as which experiences feel better than others) will likely be forever beyond our empirical grasp. I.e., if Qualia space lacks structure, there will exist no elegant heuristics or principles for interpreting what a mathematical object isomorphic to a conscious experience means. But this doesn’t seem to match the story from affective neuroscience, nor from our everyday experience: we have plenty of evidence for patterns, regularities, and invariances in phenomenological experiences. Moreover, our informal, intuitive models for predicting our future qualia are generally very good. This implies our brains have figured out some simple rules-of-thumb for how qualia is structured, and so qualia does have substantial mathematical structure, even if our formal models lag behind.

If Valence Realism is false, then we really can’t say very much about ethics, normativity, or valence with any confidence, ever. But this seems to violate the revealed preferences of the vast majority of people: we sure behave as if some experiences are objectively superior to others, at arbitrarily-fine levels of distinction. It may be very difficult to put an objective valence on a given experience, but in practice we don’t behave as if this valence doesn’t exist.


VIII. Distinctions in qualia: charting the explanation space for valence

Sections II-III made the claim that we need a bottom-up quantitative theory like IIT in order to successfully reverse-engineer valence, Section VI suggested some core problems & issues theories like IIT will need to address, and Section VII proposed three principles for interpreting IIT-style output:

  1. We should think of qualia as having a mathematical representation,
  2. This mathematical representation has a topology and probably a geometry, and perhaps more structure, and
  3. Valence is real; some things do feel better than others, and we should try to explain why in terms of qualia’s mathematical representation.

But what does this get us? Specifically, how does assuming these three things get us any closer to solving valence if we don’t have an actual, validated dataset (“data structure isomorphic to the phenomenology”) from *any* system, much less a real brain?

It actually helps a surprising amount, since an isomorphism between a structured (e.g., topological, geometric) space and qualia implies that any clean or useful distinction we can make in one realm automatically applies in the other realm as well. And if we can explore what kinds of distinctions in qualia we can make, we can start to chart the explanation space for valence (what ‘kind’ of answer it will be).

I propose the following four distinctions which depend on only a very small amount of mathematical structure inherent in qualia space, which should apply equally to qualia and to qualia’s mathematical representation:

  1. Global vs local
  2. Simple vs complex
  3. Atomic vs composite
  4. Intuitively important vs intuitively trivial


Takeaways: this section has suggested that we can get surprising mileage out of the hypothesis that there will exist a geometric data structure isomorphic to the phenomenology of a system, since if we can make a distinction in one domain (math or qualia), it will carry over into the other domain ‘for free’. Given this, I put forth the hypothesis that valence may plausibly be a simple, global, atomic, and intuitively important property of both qualia and its mathematical representation.

IX. Summary of heuristics for reverse-engineering the pattern for valence

Reverse-engineering the precise mathematical property that corresponds to valence may seem like finding a needle in a haystack, but I propose that it may be easier than it appears. Broadly speaking, I see six heuristics for zeroing in on valence:

A. Structural distinctions in Qualia space (Section VIII);

B. Empirical hints from affective neuroscience (Section I);

C. A priori hints from phenomenology;

D. Empirical hints from neurocomputational syntax;

E. The Non-adaptedness Principle;

F. Common patterns across physical formalisms (lessons from physics).

None of these heuristics determines the answer, but in aggregate they dramatically reduce the search space.

IX.A: Structural distinctions in Qualia space (Section VIII):

In the previous section, we noted that the following distinctions about qualia can be made: Global vs local; Simple vs complex; Atomic vs composite; Intuitively important vs intuitively trivial. Valence plausibly corresponds to a global, simple, atomic, and intuitively important mathematical property.


Music is surprisingly pleasurable; auditory dissonance is surprisingly unpleasant. Clearly, music has many adaptive signaling & social bonding aspects (Storr 1992; McDermott and Hauser 2005), yet if we subtract everything that could be considered signaling or social bonding (e.g., lyrics, performative aspects, social bonding & enjoyment), we’re still left with something very emotionally powerful. However, this pleasantness can vanish abruptly, and even reverse, if dissonance is added.

Much more could be said here, but a few of the more interesting data points are:

  1. Pleasurable music tends to involve elegant structure when represented geometrically (Tymoczko 2006);
  2. Non-human animals don’t seem to find human music pleasant (with some exceptions), but with knowledge of what pitch range and tempo their auditory systems are optimized to pay attention to, we’ve been able to adapt human music to get animals to prefer it over silence (Snowdon and Teie 2010).
  3. Results suggest that consonance is a primary factor in which sounds are pleasant vs unpleasant in 2- and 4-month-old infants (Trainor, Tsang, and Cheung 2002).
  4. Hearing two of our favorite songs at once doesn’t feel better than just one; instead, it feels significantly worse.
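One crude way to make the consonance/structure connection concrete (purely an illustration of mine, not any of the cited models): approximate an interval’s frequency ratio by a simple fraction and score it by how small the fraction’s terms are. Simpler ratios tend to be heard as more consonant:

```python
from fractions import Fraction

# A crude consonance proxy: approximate an interval's frequency ratio
# by a simple fraction and score it by numerator + denominator.
# Psychoacoustically, simple ratios (octave 2/1, perfect fifth 3/2)
# tend to sound more consonant than complex ones (the tritone's
# equal-tempered ratio is irrational).
def ratio_complexity(ratio, max_denominator=64):
    frac = Fraction(ratio).limit_denominator(max_denominator)
    return frac.numerator + frac.denominator

octave = ratio_complexity(2.0)         # 2/1 -> complexity 3
perfect_fifth = ratio_complexity(1.5)  # 3/2 -> complexity 5
tritone = ratio_complexity(2 ** 0.5)   # a much more complex fraction
print(octave < perfect_fifth < tritone)  # True
```

This toy obviously ignores timbre, context, and culture, but it shows how "elegant structure" in a signal can be given a number.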

More generally, it feels like music is a particularly interesting case study by which to pick apart the information-theoretic aspects of valence, and it seems plausible that evolution may have piggybacked on some fundamental law of qualia to produce the human preference for music. This should be most obscured with genres of music which focus on lyrics, social proof & social cohesion (e.g., pop music), and performative aspects, and clearest with genres of music which avoid these things (e.g., certain genres of classical music).


X. A simple hypothesis about valence

To recap, the general heuristic from Section VIII was that valence may plausibly correspond to a simple, atomic, global, and intuitively important geometric property of a data structure isomorphic to phenomenology. The specific heuristics from Section IX surveyed hints from a priori phenomenology, hints from what we know of the brain’s computational syntax, introduced the Non-adaptedness Principle, and noted the unreasonable effectiveness of beautiful mathematics in physics to suggest that the specific geometric property corresponding to pleasure should be something that involves some sort of mathematically-interesting patterning, regularity, efficiency, elegance, and/or harmony.

We don’t have enough information to formally deduce which mathematical property these constraints indicate, yet in aggregate these constraints hugely reduce the search space, and also substantially point toward the following:

Given a mathematical object isomorphic to the qualia of a system, the mathematical property which corresponds to how pleasant it is to be that system is that object’s symmetry.


XI. Testing this hypothesis today

In a perfect world, we could plug many peoples’ real-world IIT-style datasets into a symmetry detection algorithm and see if this “Symmetry in the Topology of Phenomenology” (SiToP) theory of valence successfully predicted their self-reported valences.

Unfortunately, we’re a long way from having the theory and data to do that.

But if we make two fairly modest assumptions, I think we should be able to perform some reasonable, simple, and elegant tests on this hypothesis now. The two assumptions are:

  1. We can probably assume that symmetry/pleasure is a more-or-less fractal property: i.e., it’ll be evident at basically all locations and scales of our data structure, and so it should be obvious even with imperfect measurements. Likewise, symmetry in one part of the brain will imply symmetry elsewhere, so we may only need to measure it in a small section, which need not even be directly contributing to consciousness.
  2. We can probably assume that symmetry in connectome-level brain networks/activity will roughly imply symmetry in the mathematical-object-isomorphic-to-phenomenology (the symmetry that ‘matters’ for valence), and vice-versa. I.e., we need not worry too much about the exact ‘flavor’ of symmetry we’re measuring.

So, given these assumptions, I see three ways to test our hypothesis:

1. More pleasurable brain states should be more compressible (all else being equal).

Symmetry implies compressibility, and so if we can measure the compressibility of a brain state in some sort of broad-stroke fashion while controlling for degree of consciousness, this should be a fairly good proxy for how pleasant that brain state is.
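
As a toy illustration of what a compressibility-based proxy could look like, here is a minimal Python sketch comparing the compressibility of a highly patterned signal against unstructured noise. The signals are synthetic stand-ins for “symmetric” vs. “asymmetric” brain states, and the 8-bit quantization and choice of zlib are arbitrary assumptions, not part of the hypothesis:

```python
import zlib

import numpy as np

def compression_ratio(signal):
    """Compressed size / raw size; lower means more compressible."""
    raw = np.round(signal * 127).astype(np.int8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000)

# Synthetic stand-ins for brain states: a harmonically patterned signal
# (highly regular) vs. unstructured noise. Both are scaled to [-1, 1].
patterned = sum(np.sin(2 * np.pi * f * t) for f in (1, 2, 4, 8)) / 4
noise = rng.standard_normal(t.size)
noise /= np.abs(noise).max()

# The patterned signal should compress far better than the noise does.
```

A real test would of course use neuroimaging data rather than toy signals, and would need to control for the overall degree of consciousness, as noted above.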


2. Highly consonant/harmonious/symmetric patterns injected directly into the brain should feel dramatically better than similar but dissonant patterns.

Consonance in audio signals generally produces positive valence; dissonance (e.g., nails-on-a-chalkboard) reliably produces negative valence. This obviously follows from our hypothesis, but it’s also obviously true, so we can’t use it as a novel prediction. But if we take the general idea and apply it to unusual ways of ‘injecting’ a signal into the brain, we should be able to make predictions that are (1) novel, and (2) practically useful.

TMS is generally used to disrupt brain function by oscillating a strong magnetic field over a specific region to make those neurons fire chaotically. But if we used it on a lower-powered, rhythmic setting to ‘inject’ a symmetric/consonant pattern directly into parts of the brain involved directly with consciousness, the result should feel good, or at least produce much better valence than a similar dissonant pattern.

Our specific prediction: direct, low-power, rhythmic stimulation (via TMS) of the thalamus at harmonic frequencies (e.g., @1hz+2hz+4hz+6hz+8hz+12hz+16hz+24hz+36hz+48hz+72hz+96hz+148hz) should feel significantly more pleasant than similar stimulation at dissonant frequencies (e.g., @1.01hz+2.01hz+3.98hz+6.02hz+7.99hz+12.03hz+16.01hz+24.02hz+35.97hz+48.05hz+72.04hz+95.94hz+147.93hz).
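
One way to see why these two frequency sets differ: the harmonic set contains only integer frequencies, so the combined waveform repeats exactly once per second, while the detuned components slowly drift out of phase and the waveform never quite repeats. A short Python sketch (synthetic waveforms; the sample rate and duration are arbitrary choices) makes this concrete:

```python
import numpy as np

HARMONIC = [1, 2, 4, 6, 8, 12, 16, 24, 36, 48, 72, 96, 148]
DETUNED = [1.01, 2.01, 3.98, 6.02, 7.99, 12.03, 16.01,
           24.02, 35.97, 48.05, 72.04, 95.94, 147.93]

def waveform(freqs, seconds=30, sr=1000):
    """Sum of unit-amplitude sines at the given frequencies."""
    t = np.arange(seconds * sr) / sr
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

def self_similarity(x, period=1000):
    """Mean correlation between the first 1-second chunk and each later chunk."""
    chunks = x[: len(x) // period * period].reshape(-1, period)
    return float(np.mean([np.corrcoef(chunks[0], c)[0, 1] for c in chunks[1:]]))

# The harmonic waveform is exactly periodic (self-similarity ~1.0), while
# the detuned waveform decorrelates from its first second over time.
```

Whether this kind of temporal regularity is the symmetry that ‘matters’ for valence is exactly what the proposed TMS experiment would probe.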


3. More consonant vagus nerve stimulation (VNS) should feel better than dissonant VNS.

The above harmonics-based TMS method would be a ‘pure’ test of the ‘Symmetry in the Topology of Phenomenology’ (SiToP) hypothesis. It may rely on developing custom hardware and is also well outside of my research budget.

However, a promising alternative method to test this is with consumer-grade vagus nerve stimulation (VNS) technology. Nervana Systems has an in-ear device which stimulates the Vagus nerve with rhythmic electrical pulses as it winds its way past the left ear canal. The stimulation is synchronized with either user-supplied music or ambient sound. This synchronization is done, according to the company, in order to mask any discomfort associated with the electrical stimulation. The company says their system works by “electronically signal[ing] the Vagus nerve which in turn stimulates the release of neurotransmitters in the brain that enhance mood.”

This explanation isn’t very satisfying, since it merely punts the question of why these neurotransmitters enhance mood, but their approach seems to work– and based on the symmetry/harmony hypothesis we can say at least something about why: effectively, they’ve somewhat accidentally built a synchronized bimodal approach (coordinated combination of music+VNS) for inducing harmony/symmetry in the brain. This is certainly not the only component of how this VNS system functions, since the parasympathetic nervous system is both complex and powerful by itself, but it could be an important component.

Based on our assumptions about what valence is, we can make a hierarchy of predictions:

  1. Harmonious music + synchronized VNS should feel the best;
  2. Harmonious music + placebo VNS (unsynchronized, simple pattern of stimulation) should feel less pleasant than (1);
  3. Harmonious music + non-synchronized VNS (stimulation that is synchronized to a different kind of music) should feel less pleasant than (1);
  4. Harmonious music + dissonant VNS (stimulation with a pattern which scores low on consonance measures such as (Chon 2008)) should feel worse than (2) and (3);
  5. Dissonant auditory noise + non-synchronized, dissonant VNS should feel pretty awful.
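
The ‘dissonant VNS’ condition presupposes a numeric consonance measure for stimulation patterns; (Chon 2008) is one published option. As a simpler stand-in, here is a Python sketch of the Plomp-Levelt roughness curve in Sethares’ parameterization, scoring sets of pure tones by summed pairwise roughness (the examples are audio-range chords; applying this model to VNS pulse trains would be a further assumption):

```python
import itertools

import numpy as np

def roughness(f1, f2):
    """Plomp-Levelt roughness of two pure tones (Sethares' parameterization)."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19)  # scales the curve to the critical band
    x = s * (f_hi - f_lo)
    return np.exp(-3.5 * x) - np.exp(-5.75 * x)

def total_roughness(freqs):
    """Summed pairwise roughness; lower = more consonant."""
    return sum(roughness(a, b) for a, b in itertools.combinations(freqs, 2))

major_triad = [261.6, 329.6, 392.0]       # C4, E4, G4
semitone_cluster = [261.6, 277.2, 293.7]  # C4, C#4, D4

# The cluster should score substantially rougher than the triad.
```

Caveat: amplitudes are ignored here; Sethares’ full model weights each tone pair by loudness.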

We can also predict that if a bimodal approach for inducing harmony/symmetry in the brain is better than a single modality, a trimodal or quadrimodal approach may be even more effective. E.g., we should consider testing the addition of synchronized rhythmic tactile stimulation and symmetry-centric music visualizations. A key question here is whether adding stimulation modalities would lead to diminishing or synergistic/accelerating returns.

How Every Fairy Tale Should End

“And even though the princess defeated the dragon and married the prince at the end of the story, the truth is that the hedonic treadmill and the 7-year itch eventually caught up to them and they were not able to ‘live happily ever after’.

“Thankfully, the princess got really interested in philosophy of mind and worked really hard on developing a theory of valence in order to ‘sabotage the mill’ of affective ups and downs, so to speak. After 10 years of hard work, three book-length series of blog posts, a well-funded team of 17 rational psychonauts, and hundreds of experiments involving psychedelics and brain-computer interfaces, at last the princess was able to create a portable device capable of measuring a reasonable proxy for valence at the individual level on sub-second timescales, which over time enabled people to have reliable and sustainable control over the temporal dynamics of valence and arousal.

“Later on, the prince developed a Moloch-aware and Singleton-proof economy of information about the state-space of consciousness, and thus kick-started the era of ethical wireheads; the world became a true fairy tale… a wondrous universe of enigmatic (but always blissful) varieties of ineffable qualia. After this came to pass, one could truly and sincerely say that the prince and the princess both lived (functionally and phenomenally) happily ever after. The End.”

OTC remedies for RLS

by Anonymous


As many as 10% of people may be suffering from a mild form of Restless Legs Syndrome, and 2 to 5% may be experiencing a moderate to severe form of it (NIH). Unfortunately, the phenomenal character of this affliction is usually hard to describe, and for that reason sufferers of the condition are frequently dismissed. Whereas prescription medications can be effective at treating the acute effects of this problem (specifically opioidergics, dopaminergics, and anticonvulsants), a life-long solution has yet to be found. What is less well-known is the fact that there are many over-the-counter supplements that can help with this condition in a real and substantial way. For those who do not want to go the prescription route, here is a list of OTC supplements that can be helpful.


The list is organized into three buckets, from most effective to least effective. I also include two “negative buckets” of compounds that worsen the symptoms, which you may not be aware of. For the most part, the drawback of the chemicals in bucket 3 is that they are addictive and work “too well” (if taken regularly and later discontinued, the RLS symptoms may come back in a worse form). Bucket 2 chemicals are effective at reducing the core symptoms of the syndrome, but usually do not make the feeling of restlessness go away entirely. Drugs in bucket 1 can help mask the symptoms but do not address them directly (so they are only helpful to people who have rather mild versions of the syndrome). Bucket -1 includes things that worsen the overall restlessness but do not seem to interact with the specific feeling of restlessness characteristic of RLS. Finally, bucket -2 literally amplifies the exact feeling that characterizes RLS. Note that if you take compounds from buckets -1 and -2 in the morning, by the evening you may experience a sort of relief from their come-down.


In practice, I would suggest using bucket 3 compounds as little as possible, but keeping them around in case of a very bad night. Instead, cycle through several bucket 2 and bucket 1 drugs and experiment with combining them. Your aim is to develop a treatment that works for you, minimizes receptor down-regulation, and does not stop working over time.


Bucket 3:

  • Tianeptine Sulfate (10-30 mg; addictive)
  • Kratom (1 to 3 grams; addictive)
  • Ethylphenidate (.5 to 3mg; addictive)


Bucket 2:

  • DXM (10 to 30mg)
  • Niacinamide (300mg to 1 gram)
  • L-Tyrosine (100 to 600mg)
  • Agmatine (20mg to 1 gram, depending on personal response curve)
  • Indica Marijuana (especially edibles of high-CBD strains; even pure CBD can work, though tiny amounts of THC seem to amplify the RLS-killing effect of CBD)
  • Rhodiola Rosea (about half a tablet from this brand)


Bucket 1:

  • Magnesium supplements (dose depends on the delivery mechanism; e.g., 500mg for Citrate)
  • Iron (only if iron deficient)
  • Melatonin (.05 to 3mg, depending on personal dose response curve)
  • L-Theanine (200mg to 1 gram)
  • Aspirin (100-300 mg), Ibuprofen (100-500mg), Paracetamol (100-500mg)
  • Adrafinil (20-50mg; paradoxical sleep-inducing effect at this dose range)
  • Valerian root (varies by extract)
  • Ashwagandha (300mg to 1gram; withanolides in the 5-20mg range)
  • Phenibut (100-500mg; addictive)


Bucket -1:

  • Cholinergic nootropics (e.g. piracetam, aniracetam, coluracetam)
  • Alcohol
  • Caffeine
  • Pure THC marijuana strains
  • Psychedelics (in the form of unscheduled Research Chemicals)
  • Bromantane
  • 5HTP


Bucket -2: