12+ Reasons to Donate to ClusterFree

Why cluster headache mitigation should become your #1 effective giving priority this season: impactful, novel, very alive, and with plausibly fast results!

By Andrés Gómez Emilsson, ClusterFree Co-Founder & Member of Advisory Board

TL;DR: To motivate action and feel genuine internal alignment around a decision, sometimes we need to see it from many different angles. Even when a single reason should be enough, we need to motivate our entire internal coalition of subagents! Hence, all of these reasons to support ClusterFree in its mission:

Summary of the 12+ Reasons to Support This Cause

  1. Watch real people rapidly improve – Video testimonials of torture stopping in minutes
  2. Logarithmic scale of impact – Helping someone with this condition is potentially one of the highest-leverage gifts anyone can give to another person’s life
  3. Insurance against illegible suffering – Building a world that takes invisible pain seriously, including your own in the future! (crossing fingers you never experience such things!)
  4. Proof-of-concept for valence-first cost-effectiveness – This illustrates the corner cases where QALYs/DALYs fail catastrophically
  5. Intellectual coalition – Scott Alexander, Peter Singer, Anders Sandberg, Robin Carhart-Harris, etc. have seen the evidence and are convinced this is real
  6. Schelling point for suffering reduction – Network effects for future high-impact work, attracting genuine talent to focus on deep suffering reduction is its own value proposition
  7. It’s a strike against medical paternalism – Informed consent for known therapies, even when not officially approved, when it comes to extreme suffering, should always be an option on the table
  8. Actually tractable – Success looks like a 3-5 year timeline with a clear theory of change
  9. Speed cashes out in suffering prevented – 70,000 people in extreme agony right now, every day of delay matters greatly
  10. Works as an accelerant for an existing movement – Adding coordination to grassroots momentum that’s already underway (giving the psychedelic renaissance wings!)
  11. Psychospiritual merit (if you believe in “karma”) – Buddhist texts specifically highlight headache relief, “immeasurable merit” in store for you and your loved ones if you decide to help with clean intentions
  12. Bodhisattva vision – Practice looking into darkness without flinching
  13. Bonus – I’ll stop talking about Cluster Headaches in Qualia Computing! Fund it so I can get back to core QRI research

Introduction: Why Multiple Reasons Actually Matter

In principle, deciding where to donate should be straightforward: calculate expected value, fund the highest-impact opportunity, done. In practice, we’re coalitions of subagents with different reward architectures, time horizons, epistemics, and thresholds for action.

At a neurobiological level, motivation doesn’t work the way we pretend. It’s not about “willpower” or “being convinced by good arguments.” Different brain regions make “bids” to the basal ganglia, using dopamine as the currency. Whichever region makes the highest bid gets to determine the next action. Scott Alexander explains this in Toward A Bayesian Theory Of Willpower (2021). What we call “motivation”, within this framework, is just whichever subsystem’s bid is currently winning. Whether the details are right or not, I think this tracks how I see people behave.

If you want to trigger high-effort action, giving just one reason may not be enough. That only raises one bid. Layer multiple kinds of reasons (emotional, moral, social, self-interest, narrative, identity-based), and you multiply the bidders in your internal parliament. Scott uses stimulants as an example: they “increase dopamine in the frontal cortex… This makes… conscious processes telling you to (e.g.) do your homework… artificially… more convincing… so you do your homework.”

Look, I’m being straightforwardly manipulative here. Giving you twelve reasons instead of one is designed to activate more of your subagents. But it’s prosocially manipulative – to help you integrate a truth you might already intellectually accept but haven’t acted upon yet. The bullet point approach can be misused when it obfuscates (think laundry list of complaints when there’s really just one big issue), so let me be meta-transparent: I genuinely believe ClusterFree is extremely high-impact, and I’m deliberately structuring this to get past your action threshold. If any one or even several of these reasons feel less convincing to you, ignore them. The robust core case stands on its own.

There’s also the threshold problem. In Guyenet On Motivation (2018), Scott discusses how higher dopamine makes the brain more likely to initiate any behavior. When dopamine is low, even strong reasons may not overcome inertia. Increased dopamine “makes the basal ganglia more sensitive to incoming bids, lowering the threshold for activating movements.” Sometimes what’s needed isn’t better arguments but enough energetic activation to allow any reason at all to push action over the threshold. Which is why you should read this while high on LSD and/or Adderall, fully rested and energized.

Naturally, this connects to annealing. At QRI, we think of belief updating as requiring an energetic process. It’s not enough to know something matters; you need metabolic resources to actually integrate that knowledge and reconfigure your behavior accordingly. The REBUS (RElaxed Beliefs Under pSychedelics) framework applies here: people intellectually understand that cluster headaches are astronomically bad, that preventing them is extraordinarily high-leverage, and that this is one of the most intense forms of suffering you can and should urgently address. Yet this knowledge may remain compartmentalized and inert, unable to meaningfully shape action, resembling other “ongoing moral catastrophes” by which future generations may judge our society.

What breaks through? Multiple simultaneous channels of evidence that together cross energy thresholds. Emotional resonance. Social proof. Narrative coherence. Personal connection. These aren’t redundant: they join together as a gestalt that pushes forward the energetic budget needed for actual system-wide updating.

So here are the twelve reasons to support ClusterFree. Not because you need all twelve to “get it” intellectually, but because different reasons will activate different coalitions in your brain.

And if you’re not in a position to donate but still want to help – please keep reading. There are many high-impact ways to contribute at the end!


1. You Can Actually See People Rapidly Improving

Most charity is abstract. You send money into a statistical void and trust the meta-analyses.

With ClusterFree, you can watch video testimonials of actual people describing how psilocybin or DMT stopped “the worst pain imaginable” in minutes. The person who was screaming, punching walls, and contemplating suicide is suddenly calm, coherent, and alive again.

Watching someone’s face change like that hits you differently than reading a cost-effectiveness analysis. Your brain gets direct evidence of the state change. You see the suffering stop.

And strategically, patient testimonials are how this actually works. Raw video testimonials of “this stopped my torture” create demand that no institutional gatekeeping can fully suppress. People are already using this in advocacy. We’re just collecting the stories systematically and making them impossible to ignore. One major medical center sees enough of these, runs a supervised protocol, publishes clean results, and every other institution’s liability calculation flips.


2. On the Logarithmic Scale of Helping Another Human, This Is Unfathomably High

Preventing cluster headaches for life is plausibly one of the single largest “good deeds” a human can do for another human being. Yes, this is grandiose. But if something big IS true and you know it, pretending it’s not to avoid looking grandiose is fake humility that damages the cause.

Cluster headaches are called “suicide headaches” because the pain is so extreme that people actively contemplate ending their lives during attacks. Patients report “drilling through my eye socket,” “being stabbed in the brain,” “pain so bad I can’t think, can’t speak, can’t do anything but scream.”

Here’s a rough intuitive sketch of what the logarithmic scale of helping another person might look like (this isn’t rigorous math – it’s an illustration of what’s likely the case, directionally right[1]):

  • 10^0: holding a door open
  • 10^1: gifting a pen
  • 10^2: introducing them to someone useful
  • 10^3: helping them move places
  • 10^4: catching a major work or family mistake before it ruins their week
  • 10^5: teaching them a compounding skill (meditation, programming, emotional regulation)
  • 10^6: funding their higher education, changing their entire socioeconomic trajectory
  • 10^7: helping them escape a pathological family system
  • 10^8: preventing them from falling into a cult, deep addiction, or abusive relationship
  • 10^9: curing a chronic condition like treatment-resistant generalized anxiety disorder (GAD)
  • 10^10: saving their life while preserving psychological integrity
  • 10^11: giving them a permanent upward shift in baseline wellbeing and quality of consciousness, of the kind advanced contemplative practice can produce over the course of decades
  • 10^12: preventing cluster headaches for life

Why 10^12? A single cluster headache attack is plausibly in the 10^9 to 10^11 range of negative valence – orders of magnitude worse than migraine, worse than childbirth, worse than even torture. A typical patient experiences thousands of these across their lifetime. The multiplication is straightforward.

We’ve done empirical work quantifying cluster headache intensity using patient self-reports, cross-condition comparisons, suicide attempt rates, and other methods. Full details in our EA Forum posts (Quantifying the Global Burden of Extreme Pain from Cluster Headaches, Logarithmic Scales of Pleasure and Pain) and our Nature: HSSC paper.

The theory of change for the open letters on ClusterFree is straightforward:

Patient testimonials – Raw evidence that DMT/psilocybin (even at subhallucinogenic doses) works for a large fraction of sufferers, spreading organically through desperate communities. This is already happening underground.

Reputation-Amplified Legitimization – Get enough credible voices (clinicians, researchers, policy experts) publicly acknowledging both the crisis and the evidence. We already have 800+ signatures, many from extremely prestigious people. This shifts what’s discussable. Journalists cover it differently. Clinicians stop whispering out of fear of judgment and start preparing, even if quietly at first (I’m already seeing signs of this in some groups).

Clinical cascade – One major medical center runs a supervised protocol, publishes clean results, and every other institution’s liability math inverts. You don’t need consensus. You need one proof point, and the dominoes fall.


3. It’s Insurance Against Your Own Extreme Suffering Being Dismissed

Cluster headaches are invisible. No blood, no broken bones, nothing on medical imaging. Just someone screaming, rocking, punching walls while doctors tell them to “try reducing stress”, “have you considered yoga?”, or “maybe try an Ibuprofen?”.

This is what illegible suffering looks like. People don’t believe you. Institutions can’t help you. You’re trapped in a cage of agony that no one else can see.

Supporting work on illegible suffering means supporting the principle that intense subjective experience matters even when it can’t be measured easily. By supporting ClusterFree, you’re building the world where, if you ever wind up in incomprehensible pain (chronic illness, treatment-resistant conditions, novel syndromes medicine doesn’t understand yet, a hard-to-communicate and hard-to-alleviate pocket of deep biopsychosocial suffering), people will actually take it seriously. Where “I am in agony, and this helps” is treated as highly important data, existence is safer and more dignified.

Medical, institutional, and social gatekeeping kills people. It traps them in years of unnecessary suffering because the safe and affordable tools that work aren’t “approved” yet. By supporting patient-driven, evidence-based access to what actually helps, you’re contributing to practical moral betterment and making the world safer for everyone who might need it. Including you.


4. It’s a Proof-of-Concept for Valence-First Cost-Effectiveness

Most effective altruism uses QALYs (Quality-Adjusted Life Years) or DALYs (Disability-Adjusted Life Years) to evaluate interventions. These metrics have a major limitation: they systematically underweight extreme suffering. A QALY-based analysis of cluster headaches captures some utility loss but misses orders of magnitude of suffering because attacks are brief and non-lethal – even though they’re torture-level and recurring. The frequency distribution is also extremely skewed (some sufferers have 10+ attacks daily), which standard health economics frameworks struggle to properly account for.
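To make the gap concrete, here is a minimal toy comparison in Python. Every number in it (attack frequency, attack duration, disability weight, intensity multiplier) is an assumption made purely for illustration, not one of ClusterFree’s published estimates:

```python
# Toy comparison: DALY-style accounting vs. valence-weighted accounting
# for one hypothetical episodic patient. All numbers are illustrative
# assumptions, not ClusterFree's published figures.

attacks_per_year = 200        # assumed attack frequency
attack_hours = 1.0            # assumed average attack duration
hours_in_attack = attacks_per_year * attack_hours             # 200 h/year

# DALY-style view: time in attack weighted by a disability weight in [0, 1].
disability_weight = 0.5                                       # hypothetical
daly_style_loss = disability_weight * hours_in_attack / (24 * 365)
print(f"DALY-style loss: {daly_style_loss:.4f} healthy-years per year")        # ~0.011

# Valence-weighted view: each attack-hour counted as ~10,000x worse than a
# baseline "mild pain" hour (a hypothetical log-scale intensity factor).
intensity_multiplier = 10_000
valence_loss = intensity_multiplier * hours_in_attack / (24 * 365)
print(f"Valence-weighted loss: {valence_loss:.0f} baseline-pain-year equivalents per year")  # ~228
```

The first figure is structurally capped (a disability weight can never exceed 1), while the second scales with the claimed intensity of the experience – that cap is exactly where the “orders of magnitude” referred to above go missing.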

ClusterFree evaluates interventions based on how bad things actually feel and what their actual prevalence is – not through the lens of reduced life expectancy or economic burden: “How much suffering are we preventing when measured by its actual intensity?”

We’ve quantified cluster headache intensity and prevalence using patient self-reports, cross-condition comparisons, suicide attempt rates, and other complementary empirical methods. The result is clear: cluster headaches score astronomically high. This is why preventing them matters so much more than conventional metrics would suggest.

If you want a future where we optimize for the real reduction of suffering instead of metrics that structurally and systematically ignore its most intense forms, ClusterFree is the seed. We’re showing how you can make rigorous, evidence-based decisions by taking the actual experience seriously. This serves as a template for charity evaluation and ethical triage (not necessarily to replace current Effective Altruism methods, but to add a _critical_ missing evaluation angle to the ensemble model for how to help most effectively). 


5. You’ll Be in the Company of Intellectual Giants

Scott Alexander supports this. Anders Sandberg supports this. Peter Singer supports this. These are thought leaders with decades of track records in rigorous, scout-mindset thinking about doing good. They don’t endorse lightly. They’ve looked at the testimonials, the statistics and trends, the theory of change, and said: this is real.

If you trust their epistemics even a little, their endorsement is strong Bayesian evidence. These aren’t people chasing trends or optimizing for social approval.

And beyond the rationalist/EA sphere? Robin Carhart-Harris supports this – one of the leading psychedelic neuroscientists in the world. Shamil Chandaria supports this – doing serious work on meditation, predictive processing, and contemplative neuroscience. Christopher H. Gottschalk supports this – a neurologist who actually treats cluster headache patients and knows firsthand how devastating they are.

EA thinkers, psychedelic researchers, clinical neurologists, contemplative scientists – they’re all saying the same thing. That doesn’t happen often.

You get to join this coalition early. While it’s still underrecognized. While it requires actually engaging with the arguments instead of following the consensus. While supporting it means skin in the game.

Supporting ClusterFree now signals good taste (you can spot high-impact opportunities before they’re obvious), high reasoning capacity (you can evaluate complex arguments across disciplines), genuine compassion (you care about actual suffering, not just legible causes), and epistemic independence (you can disagree with the consensus when the evidence demands it).

When this becomes mainstream (and it will), you were there first.


6. It’s Creating a Schelling Point for Serious Suffering-Reduction Work

ClusterFree is reducing the coordination costs and bringing together people who can spot neglected pools of immense value early on.

Researchers who care about phenomenological intensity. Clinicians frustrated with institutional gatekeeping who want evidence-based psychedelic medicine. Policymakers who understand regulatory strategy. Patients with direct experience who want to help others. All working on the same thing with a clear theory of change.

Many causes tend to be either too vague (“reduce suffering”) or too narrow (“fund this one study”). ClusterFree hits the sweet spot – it is specific enough to be actionable, broad enough to matter at scale, and legible enough to attract serious supporters.

The network effects compound. When the next high-leverage suffering reduction project comes along, there’s already a group of competent people who know how to execute. The people showing up now will co-build what comes next. Rather than funding one project, you’re seeding a network that keeps generating high-impact work.


7. It’s a Strike Against Paternalistic Control Over Suffering Relief

Right now, people with cluster headaches are told they cannot officially access psilocybin or DMT – the interventions that consistently, rapidly, and reliably work for a large fraction of sufferers – because the institutions have decided they’re not allowed to make that informed choice. Even when they’re screaming in agony. Even when they’re suicidal. Even when nothing else helps.

Medical paternalism is at its most cruel when patients hear: “We know you’re suffering, but you can’t have the effective, affordable, and safe-to-manage thing that stops your agony, because we haven’t finished the proper studies yet, and/or because of the system’s inertia.” Never mind that converging evidence shows it works. Never mind that patients are already using it skilfully and reporting dramatic relief. Never mind that the risk profile is more than worth it given the suffering prevented.

ClusterFree, with your support, is building the legal, scientific, and social infrastructure to challenge that amoral status quo. We pave the way for informed consent, supervised access, and letting people make rational decisions about their own unbearable pain.

If you value bodily autonomy, participatory medicine, and the right to pursue relief from extreme suffering, this is the fight. And it’s winnable thanks to multiple predictors of success. 


8. This Is Actually Tractable

Most extreme suffering feels impossibly hard to address. Oftentimes, contemplating extreme suffering causes a sense of helplessness. It’s too big, too entrenched, and too complex. You can care deeply and still feel like there is nothing you can meaningfully do about it.

Cluster headaches are different. We have video testimonials. We have 800+ signatures from people with institutional power. We have a clear mechanism – psilocybin/DMT abort attacks rapidly and safely. We have willing clinicians ready to run supervised protocols. We have patient demand already creating the underground adoption.

The main barrier is coordination and legitimacy-building. That’s where ClusterFree steps in: we close the gap between common knowledge and the rollout of systemic solutions. 

And we’re going beyond mere advocacy. Bob Wold of ClusterBusters calls DMT a “breakthrough therapy” for its near-instant pain relief; we’re working to understand why it works so we can identify the best next steps. Our research includes exploring legal, non-hallucinogenic (or only mildly hallucinogenic) alternatives like 5-MeO-DALT, which one patient discovered in Shulgin’s TIHKAL and used to successfully treat 46 cluster headache patients. Developing targeted therapies based on mechanistic understanding, and testing new approaches, translates directly into greater accessibility and effectiveness.

We (admittedly optimistically) believe this is doable within 3 to 5 years of focused and effective execution: build the coalition, get one major medical center to publish clean results, and watch the common knowledge cascade. Meanwhile, we’re already developing better treatments with maximally broad legal adoption.

Most things that matter this much take decades… or never even happen. This one is actually within reach.


9. Every Month of Delay Means Unnecessary Pits of Suffering

Right now, while you’re reading this, ~70,000 people are experiencing a cluster headache attack. More will start in the next few minutes. And more after that, like a global wave of agonizing pain.

Roughly 3 million people worldwide have cluster headaches in any given year. Many experience attacks daily or multiple times per week during the cluster periods. We estimate that globally, cluster headache patients spend approximately 70,670 person-years per year in pain, with about 8,570 person-years (about 3.1 million person-days) spent at extreme pain levels (≥9/10).
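A quick consistency check on these figures, restating only the numbers quoted above:

```python
# Consistency check using only the figures quoted above.
total_person_years_per_year = 70_670   # person-years spent in pain, per calendar year
extreme_person_years_per_year = 8_570  # person-years at >=9/10 pain, per calendar year

# Person-years of pain accrued per calendar year equals, by construction,
# the average number of people who are in pain at any given instant.
print(f"~{total_person_years_per_year:,} people in a cluster attack at any moment")

# Converting the extreme-pain figure to person-days:
extreme_person_days = extreme_person_years_per_year * 365
print(f"~{extreme_person_days:,} person-days/year at >=9/10 pain")  # ~3.1 million
```

Note that 70,670 person-years of pain per year is, by definition, the average number of people in pain at any given instant – which is where the “~70,000 people right now” figure above comes from.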

The math is brutal: with every month of delay, patients undergo millions of preventable torture-level attacks. While other cause areas and interventions may present genuine dilemmas about donating now versus later, the case for ClusterFree is urgently clear – donate now, and we will do our best to bring unimaginable counterfactual relief to millions in 2026-2027.

Our model is designed for speed – we are not waiting for perfect RCTs, commercial products, or stable institutional consensus. We are building the strategic legitimacy cascade that lets institutions act on what we already know.

The suffering is happening right now. The effective solution exists right now. We know how to connect the dots, and the only question is how fast we can do so.


10. ClusterFree Is Accelerating an Already Developing Movement

ClusterBusters has been doing heroic work for years, building community, sharing information, and giving people hope. The psychedelic renaissance has been shifting cultural and scientific attitudes. Various researchers and advocates have been pushing this forward through different channels.

ClusterFree adds a specific piece: demonstrating that this is a winnable fight right now.

We bring:

  • An explicit theory of change (testimonials lead to reputation-amplified legitimization, which leads to clinical cascade);
  • 800+ signatures from outstanding individuals, many with institutional power and cultural influence;
  • A straightforward narrative: “this is effective, safe, and urgent, and we can scale this legally” – and we’re not afraid to signal DMT as especially promising (due to its extremely fast pain relief profile when “vaped” at the onset of an attack);
  • Coordination infrastructure that connects patients, clinicians, researchers, and funders around a shared goal; and
  • A global but local-context-sensitive approach in both coverage and mindset: while ClusterBusters focuses on the U.S. and UK, we’re building parallel advocacy tracks across multiple jurisdictions (Canada, Europe, Latin America, etc.) to build the missing capacity.

This strategy acts synergistically with other approaches, de-risking them rather than obstructing them. When a major medical center decides to run a supervised protocol, they will do it in an environment where 800+ credible voices (as of December 13th 2025) have already confirmed that this is real, this matters, and the research must take place as soon as possible.

Our strategy is being developed and executed by uniquely talented individuals with a strong track record. Alfredo Parra leads the organization – he is exceptional at navigating the interface between institutions, has 7+ years of nonprofit management experience, and has proven to be extremely conscientious and high-integrity (don’t take my word for it – look at all the work). The team and the community that seeded it concentrate people who simultaneously understand the importance of suffering reduction, psychedelic phenomenology, regulatory strategy, and movement building. They both care about the deep structure of consciousness and aren’t swayed by common narratives. This is a rare comparative advantage and, in our view, makes them an excellent fit to push this cause forward.

Fruitful work has been happening already. Where we step in is to provide leverage at a specific bottleneck: making the path to legitimacy visible and coordinated.


11. If You Take “Karma” Seriously, Look at What the Texts Say About Headache Relief

In the Bodhicaryāvatāra, Śāntideva teaches that “immeasurable merit” arises even from the simple thought: “Let me dispel the headaches of beings.” The tradition treats this literally. Not metaphorically. Relieving sharp, overwhelming pain generates outsized karmic effects because it interrupts some of the most intense forms of duḥkha in the human realm.

Why headaches specifically? Because they were considered the archetype of piercing, mind-breaking pain in the classical world. Cluster headaches exceed even that ancient benchmark. They represent some of the most unbearable moments a human mind can experience.

The karmic logic is clear: if intention aligned with the relief of severe suffering produces merit that scales with the intensity of dukkha relieved, then work that prevents torture-level pain for thousands of people is not ordinary charity but a high-density, boutique, ultra-rare karmic investment.

For practitioners of the Bodhisattva path, karma constitutes a feedback loop shaping future clarity, opportunity, and awakening. Helping beings escape states of extreme pain is singled out across the Mahāyāna as one of the fastest ways to accumulate merit and purify obscurations.

If even contemplating the wish to relieve a single headache creates immeasurable merit, then actively supporting work that may end this class of suffering at scale plants karmic seeds that ripple across lifetimes.

Even if you hold a weak, naturalized version of karma (something like “intentions to help tend to produce good outcomes proportional to the good intended”), the efficiency here is absurdly high. Instead of helping someone have a slightly better day, you’re preventing thousands of hours of above-torture-level pain per person.

And what if you don’t believe in karma at all? The consequentialist case is still clear. You’re preventing, say, ~10^12 units of negative valence per person.


12. You Get the Bodhisattva-Tier Vision

Most people, when they look into the true darkness of suffering (the worst pain imaginable, sustained for hours, recurring for decades), recoil. They look away. They rationalize (“someone else will handle it”), cope (“well, suffering is just part of life”), or freeze (“I can’t do anything about this anyway”).

Such reactions are understandable given the limits of our agency and the scope of the challenge. Luckily, there’s another response possible and available today:

You see it, and you roll up your sleeves. Where others flinch or cope, you take intentional action.

That capacity to clearly perceive the worst of what’s real and respond with competence, care, direction, and focus – rather than despair, avoidance, denial, or freezing – is a rare gem. It separates people who talk about compassion from people who enact it. The “Bodhisattva move” is: “I see the suffering. I will not turn away. I will do what needs to be done.”

Supporting ClusterFree strengthens that moral muscle. It’s a practice for the kind of person you may want to be: someone who can look into the darkest abyss and respond with pragmatism, not platitudes.


And a bonus reason for Qualia Computing readers…

So I Can Stop Talking About Cluster Headaches in Qualia Computing

Look, I very deeply care about this work, and this is why ClusterFree needs to claim its own space. QRI has a complementary mission to fulfill – studying and utilizing coupling kernels, topological approaches to the boundary problem, neural annealing frameworks, and the deep structure of valence.

The more ClusterFree is funded and self-sufficient, the more I can get back to the core theoretical work for which I’m best suited. Which, by the way, is exactly how we identify the next high-leverage suffering reduction opportunities!

If you want me to shut up about cluster headaches and get back to talking for hours about beam-splitter holography and DMT phenomenology, the fastest way to make that happen is to generously fund ClusterFree.

You’re welcome.


What We’re Specifically Asking For

ClusterFree is currently a two-person operation: Alfredo leading the day-to-day execution (coalition building, clinical coordination, policy navigation, the 800+ signature campaign), and me providing strategic direction, research frameworks, writeups like this one, and QRI infrastructure. The initial donations will let us hire additional top talent to manage critical workstreams, so that we can:

  • Pursue parallel regulatory tracks in different jurisdictions;
  • Optimize our media presence by talking to journalists, podcasters, and medical journals;
  • Build global partnerships with patient organizations, headache centers, psychedelic advocacy groups, and retreat centers that treat this and related conditions;
  • Coordinate with medical centers willing to run supervised trials;
  • Create high-quality topical resources for patients in multiple languages, which are scarce and difficult to find; and
  • Pursue other high-impact value streams we’re ready to launch with additional capacity.

If significant funding is obtained, it will allow us to personally visit retreat centers and bring people with cluster headaches to suitable settings where they can experiment with these therapies, and where we can study them using QRI’s approaches to systematic phenomenology mapping, including EEG and biorhythm monitoring. This might turn out to be really important, possibly allowing us to determine what aspect of psilocybin/DMT relieves the pain. Our working assumption, based on many interviews with sufferers, is that DMT’s “body vibration” effect is key for its pain relief – if true, this is something we could significantly optimize by developing more targeted therapies.

While our network of volunteers is growing (see Slack below), having dedicated paid staff accelerates our efforts dramatically. The faster we move, the louder we say “no” to overlooked suffering.


Can’t Donate But Want to Help?

There are many high-impact ways to contribute beyond financial support:

  • Sign the open letter – Adding your name increases our legitimacy and helps shift the Overton window.
  • Share patient testimonials – If you have cluster headaches and have used psychedelics, your story can help build the evidence base. We believe that video testimonials from sufferers, in particular, are especially powerful. Recordings showing the moment itself where psilocybin/DMT relieves the suffering in real time might have the most emotional resonance overall.
  • Join our Slack – We list simple but high-impact volunteer tasks (translations, social media, research assistance, essay feedback, etc).
  • Connect us with key people – Do you know journalists, podcasters, clinicians, policy makers, or potential donors? Introductions are greatly appreciated!
  • Spread the word – Share this essay, talk about cluster headaches with the right mood, and become the relieving change you want to see and experience in the world.

Conclusion

With all these reasons in mind, ClusterFree satisfies the utilitarian, the virtue ethicist, the long-term strategist, the person who wants meaning, the person who values courage, the person who wants to accumulate spiritual merit, the person who wants to bring these therapies to the FDA approval status, the person who just wants to see real humans stop screaming in pain, and the one who embodies all these motivations simultaneously.

Donate to ClusterFree

Donate to QRI (the incubator organization that made this possible, and conducts more aligned efforts)

Sign the open letter

Our internal coalitions can agree that this matters, and we can actually do it. Thank you.


Acknowledgments: Many thanks to Marcin Kowrygo for his generous edits of the draft. Thanks to Chris Percy, Roberto Goizueta, Hunter Meyer, and, of course, Alfredo Parra for relevant discussions and suggestions for this write-up. Huge thanks to the ClusterBusters team for their incredible and ethically urgent work (and generosity with their time to help people in need, as well as accepting being interviewed in a pinch at Psychedelic Science 2025). Thanks to Jonathan Leighton (OPIS) for inspiration, aligned work, and fighting the good fight! Thanks to Jessica Khurana (and her team) for founding Eleusina Retreat – the world’s only retreat center focused on using psychedelics, legally, for treating extreme pain conditions. Thanks to Maggie Wassinge for her copious emotional support, love, and motivation to keep doing the real work, even when it feels hopeless at times (seriously, THANK YOU). And to the spirit of Anders Amelin (RIP), who is always with us, encouraging and motivating, giving us strength and intelligence. May he rest in peace, knowing we’re pursuing our ambitious suffering-reducing goals <3 And thanks to the entire QRI team, as well as the broader qualia community at large, for creating a container where these ideas can be freely explored with curiosity and without stigma. And finally, thanks to all of the donors of QRI and ClusterFree: we will do what we can to make you proud of supporting us. Metta!


[1] On the 10^12 estimate: This is admittedly a back-of-the-envelope calculation, but here’s the reasoning. A cluster headache patient might experience anywhere from 3,000 attacks (conservative, successful treatment) to 30,000+ attacks (severe chronic cases) over their lifetime. Using a conservative estimate of 3,000 attacks averaging ~60 minutes (3,600 seconds) each gives us ~10^7 seconds of extreme pain. Now for the intensity ladder. Holding a door open might prevent ~0.1 units of discomfort, using a pinprick as 1 unit. Kidney stones, already rated 10/10 on standard pain scales, are plausibly ~1,000× more intense than a pinprick (10^3). Each second of cluster headache pain appears to be ~10× worse than kidney stones (10^4 relative to our baseline). Multiply by 10^7 seconds, and we get 10^11 from pure hedonic intensity alone. Additionally, cluster headaches impose a constant inter-ictal burden (meaning, the suffering between attacks), including PTSD, anticipatory anxiety, and a profound sense of doom between attacks (see interview with Cluster Busters founders at 53:10-53:40). This could add a 2-5X multiplier, bringing us to ~10^12. For severe cases with 10× more attacks, the calculation easily reaches 10^13 or higher. The true value likely ranges between 10^7 (very mild cases with effective treatment) and 10^16 (severe chronic cases accounting for peak intensities and suffering between attacks). Even at the conservative end, preventing cluster headaches for life remains one of the highest-impact interventions accessible to individuals. Similar back-of-the-envelope calculations can be done to put in perspective each of the steps on the “logarithmic scale of help you can provide to someone”.
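For readers who want to trace the arithmetic, here is the same back-of-the-envelope calculation as a minimal Python sketch (conservative case, using only the assumptions already stated in this footnote):

```python
# Back-of-the-envelope estimate from the footnote above (conservative case).
attacks_per_lifetime = 3_000                 # conservative lifetime attack count
seconds_per_attack = 3_600                   # ~60-minute average attack
seconds_in_extreme_pain = attacks_per_lifetime * seconds_per_attack   # ~1.1e7

# Intensity ladder, in pinprick-equivalents per second:
pinprick = 1
kidney_stone = 1_000 * pinprick              # ~10^3, per the footnote
cluster_second = 10 * kidney_stone           # ~10^4, per the footnote

hedonic_burden = seconds_in_extreme_pain * cluster_second   # ~1.1e11
interictal_multiplier = 3                    # assumed midpoint of the 2-5x range
lifetime_burden = hedonic_burden * interictal_multiplier    # ~3e11, i.e. ~10^11-10^12

print(f"{lifetime_burden:.1e} pinprick-second equivalents prevented per patient")
```

Raising the multiplier to 5 and the attack count to 30,000 pushes the same sketch well past 10^12, consistent with the wider range given above.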


Scott Alexander in “Links For December 2024” (Dec 24 2024):

13: Alfredo Parra of Qualia Research Institute on cluster headaches. Cluster headaches are plausibly the most painful medical condition. If you ask a cluster patient to rate their pain, they’ll almost always say 10/10. Does that mean the headaches are twice as painful as a 5/10 condition? There are some philosophical reasons to expect pain to be logarithmic, so plausibly cluster headaches could be orders of magnitude more painful than the average condition. Once you internalize that possibility, it throws a wrench into normal QALY ratings and suggests that, even though cluster headaches are pretty rare, they might cause a substantial portion of the global burden of disease (or even a substantial portion of the suffering in the world). Some psychedelics, especially psilocybin and DMT, seem to treat cluster headaches very effectively, so the more you believe this reanalysis, the more interested you should be in figuring out how to turn these into an accessible therapy (see clusterbusters for more information on this aspect).

And more recently in “Open Thread 409” (Nov 24 2025):

2: Qualia Research Institute announces their spinoff effort ClusterFree. Cluster headaches (aka “suicide headaches”) are probably the most painful medical condition known to science, which makes them a natural priority for some utilitarians. They seem to be extremely treatable by psychedelics like psilocybin and DMT (including sub-hallucinogenic doses), so ClusterFree is working on getting governments to research this further and maybe get these drugs into the medical pipeline (cf. ketamine for depression). There’s an open letter here, and you can contact them here. The information for patients is at the bottom of this page.

Peter Singer in his recent piece “The Best Treatment for the Most Painful Medical Condition Is Illegal” (Dec 11 2025):

A recent article in Nature: Humanities and Social Science Communications found the funding provided in the United Kingdom for research on cluster headaches to be “orders of magnitude” less than that provided for multiple sclerosis, a condition that affects a similar number of people. The authors conclude that, given that we regard the provision of anesthesia for surgery to be essential, we should also recognize relief for extreme pain as essential. Finding ways to do so should warrant the highest funding priority.

A new initiative called Clusterfree has launched global open letters calling on governments to provide legal access to psychedelics for people with cluster headache. I have signed, and I hope that you will, too.

Team Consciousness: A Philosophy of Truth-Seeking Ethics

I have not settled (and maybe it’s not for me to do so) on the core tenets of Team Consciousness. This would be a kind of philosophy or spirituality that tries to derive ethics from truth and actually get at the truth, rather than a convenient approximation of it (or worse, a misrepresentation of it for the sake of memetic reproduction capacity). What I’ve thought for many years, and what has remained stable, is that we can reduce the tenets to three core principles:

  1. Oneness / Frame Invariance
  2. Valence Realism
  3. Math

First, we must realize that every point in reality is equally real. There are more or less intense experiences, of course, but this is in fact a measure of how much reality is expressed in each. The core idea here is not that every experience is literally equally significant (they’re not) but that the spatiotemporal coordinates of an experience are irrelevant to its significance. Your experiences, or the experiences of the members of your tribe or species, are not more or less real than those of anyone else, factoring in their degree and intensity of consciousness.

The second core idea is that valence – whether experiences feel good or bad – is the source of value. Moreover, valence structuralism (an implication of valence realism in light of empirical observations of what feels good or bad in practice) entails that the value of reality is encoded in the geometric and topological basis of consciousness. Indeed, there are better and worse forms of being, and this is not an arbitrary matter, but one that can be investigated directly and devoid of personal prejudice.

And finally: math. It is not the same to suffer for one second versus a million years. It is not the same for one person to suffer as it is for a billion persons in torment. It is not the same for love to exist for a minute versus it being the foundation of a civilization. Amounts matter; qualities matter. This is tautological, of course. But for strange reasons, our empathizing cognitive styles often neglect math. So we ought to correct for this bug.

I think that all of ethics can be reconstructed from these principles. And in fact, they might help solve many moral paradoxes and enigmas. Just apply them diligently and rigorously and see how they allow you to discern between good and evil.

My hope is that the reproductive capacity of these three core principles will come from the fact that (1) they are true (and truth is convergent for those who seek it) and (2) they are highly beneficial and generate excess value. On (2), I’d point out that valence realism and the oneness of consciousness principle have practical implications, ranging from a science of consciousness capable of reducing depression, anxiety, and chronic pain, to future consciousness-altering technologies that will greatly enhance our intelligence and collective coordination capacities. I wish for these tenets to not acquire additional clauses that are there merely for their reproduction capacity at the cost of truth or accuracy; they should stand on their own. But these might not be the final set. I’m open to suggestions and enhancements 🙂

Costs of Embodiment

[X-Posted @ The EA Forum]

By Andrés Gómez Emilsson

Digital Sentience

Creating “digital sentience” is a lot harder than it looks. Standard Qualia Research Institute arguments for why it is either difficult, intractable, or literally impossible to create complex, computationally meaningful, bound experiences out of a digital computer (more generally, a computer with a classical von Neumann architecture) include the following three core points:

  1. Digital computation does not seem capable of solving the phenomenal binding or boundary problems.
  2. Replicating input-output mappings can be done without replicating the internal causal structure of a system.
  3. Even when you try to replicate the internal causal structure of a system deliberately, the behavior of reality at a deep enough level is not currently understood (aside from how it behaves in light of inputs-to-outputs).

Let’s elaborate briefly:

The Binding/Boundary Problem

  1. A moment of experience contains many pieces of information. It also excludes a lot of information. This means that a moment of experience contains a precise, non-zero amount of information. For example, as you open your eyes, you may notice patches of blue and yellow populating your visual field. The very meaning of the blue patches is affected by the presence of the yellow patches (indeed, they are “blue patches in a visual field with yellow patches too”) and thus you need to take into account the experience as a whole to understand the meaning of all of its parts.
  2. A very rough, intuitive conception of the information content of an experience can be hinted at with Gregory Bateson’s (1972) “a difference that makes a difference”. If we define an empty visual field as containing zero information, it is possible to define an “information metric” from this zero state to every possible experience by counting the number of Just Noticeable Differences (JNDs) (Kingdom & Prins, 2016) needed to transform such an empty visual field into an arbitrary one (note: since some JNDs are more difficult to specify than others, a more accurate metric should also take into account the information cost of specifying the change in addition to the size of the change that needs to be made). It is thus easy to see that one’s experience of looking at a natural landscape contains many pieces of information at once. If it didn’t, you would not be able to tell it apart from an experience of an empty visual field. (A toy numerical sketch of this metric follows this list.)
  3. The fact that experiences contain many pieces of information at once needs to be reconciled with the mechanism that generates such experiences. How you achieve this unity of complex information starting from a given ontology with basic elements is what we call “the binding problem”. For example, if you believe that the universe is made of atoms and forces (now a disproven ontology), the binding problem will refer to how a collection of atoms comes together to form a unified moment of experience. Alternatively, if one’s ontology starts out fully unified (say, assuming the universe is made of physical fields), what we need to solve is how such a unity gets segmented out into individual experiences with precise information content, and thus we talk about the “boundary problem”.
  4. Within the boundary problem, as Chris Percy and I argued in Don’t Forget About the Boundary Problem! (2023), the phenomenal (i.e. experiential) boundaries must satisfy stringent constraints to be viable. Namely, among other things, phenomenal boundaries must be:
    1. Hard Boundaries: we must avoid “fuzzy” boundaries where information is only “partially” part of an experience. This is simply the result of contemplating the transitivity of the property of belonging to a given experience. If a (token) sensation A is part of a visual field at the same time as a sensation B, and B is present at the same time as C, then A and C are also both part of the same experience. Fuzzy boundaries would break this transitivity, and thus make the concept of boundaries incoherent. As a reductio ad absurdum, this entails phenomenal boundaries must be hard.
    2. Causally significant (i.e. non-epiphenomenal): we can talk about aspects of our experience, and thus we can know they are part of a process that grants them causal power. Moreover, if structured states of consciousness did not have causal effects in some way isomorphic to their phenomenal structure, evolution would simply have no reason to recruit them for information processing. Although epiphenomenal states of consciousness are logically coherent, that scenario would leave us with no reason to believe, one way or the other, that the structure of experience varies in a way that mirrors its functional role. On the other hand, states of consciousness having causal effects directly related to their structure (the way they feel) fits the empirical data. By what seems to be a highly overdetermined Occam’s Razor, we can infer that the structure of a state of consciousness is indeed causally significant for the organism.
    3. Frame-invariant: whether a system is conscious should not depend on one’s interpretation of it or the point of view from which one is observing it (see appendix for Johnson’s (2015) detailed description of frame invariance as a theoretical constraint within the context of philosophy of mind).
    4. Weakly emergent on the laws of physics: we want to avoid postulating either that there is a physics-violating “strong emergence” at some level of organization (“reality only has one level” – David Pearce) or that there is nothing peculiar happening at our scale. Bound, causally significant experiences could be akin to superfluid helium. Namely, entailed by the laws of physics, but behaviorally distinct enough to play a useful evolutionary role.
  5. Solving the binding/boundary problems does not seem feasible with a von Neumann architecture in our universe. The binding/boundary problem requires the “simultaneous” existence of many pieces of information at once, and this is challenging using a digital computer for many reasons:
    1. Hard boundaries are hard to come by: looking at the shuffling of electrons from one place to another in a digital computer does not suggest the presence of hard boundaries. What separates a transistor’s base, collector, and emitter from its immediate surroundings? What’s the boundary between one pulse of electricity and the next? At best, we can identify functional “good enough” separations, but no true physics-based hard boundaries.
    2. Digital algorithms lack frame invariance: how you interpret what a system is doing in terms of classic computations depends on your frame of reference and interpretative lens.
    3. The bound experiences must themselves be causally significant. While natural selection seemingly values complex bound experiences, our digital computer designs precisely aim to denoise the system as much as possible so that the global state of the computer does not influence in any way the lower-level operations. At the algorithmic level, the causal properties of a digital computer as a whole, by design, are never more than the strict sum of their parts.
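As a toy illustration of the JND-counting metric from point 2 above, here is a minimal sketch under crude assumptions of my own (a “visual field” reduced to a short 1D array of brightness values, and a single fixed JND size shared by every location – neither assumption is part of the original argument):

```python
# Toy illustration of the JND-based "information metric" sketched in point 2.
# Assumptions (mine, not from the original): a 1D "visual field" of brightness
# values and one fixed JND size shared by every location.

def jnd_distance(field, jnd=0.1):
    """Count the Just Noticeable Differences needed to build `field`
    starting from an empty (all-zero) visual field."""
    return sum(round(abs(value) / jnd) for value in field)

landscape   = [0.8, 0.3, 0.9, 0.5]   # a richer experience
faint_patch = [0.1, 0.0, 0.0, 0.0]   # barely distinguishable from emptiness

print(jnd_distance(landscape))    # 25 -> many differences that make a difference
print(jnd_distance(faint_patch))  # 1  -> almost indistinguishable from the empty field
```

A more faithful metric would also charge for specifying where each change goes (as the note in point 2 says), but even this crude count captures the idea that a rich experience sits “far away” from an empty field in JND space.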

Matching Input-Output Mappings Does Not Entail the Same Causal Structure

Even if you replicate the input-output mapping of a system, that does not mean you are replicating the internal causal structure of the system. If bound experiences are dependent on specific causal structures, they will not happen automatically without considerations for the nature of their substrate (which might have unique, substrate-specific, causal decompositions). Chalmers’ (1995) “principle of organizational invariance” assumes that replicating a system’s functional organization at a fine enough grain will reproduce identical conscious experiences. However, this may be question-begging if bound experiences require holistic physical systems (e.g. quantum coherence). In such a case, the “components” of the system might be irreducible wholes, and breaking them down further would result in losing the underlying causal structure needed for bound experiences. This suggests that consciousness might emerge from physical processes that cannot be adequately captured by classical functional descriptions, regardless of their granularity.

Moreover, whether we realize it or not, it is always us (ourselves complex bound experiences) who interpret the meaning of the input and the output of a physical system; it is not interpreted by the system itself. This is because the system has no real “point of view” from which to interpret what is going on. This is a subtle point, and I will merely mention it for now, but a deep exposition of this line of argument can be found in The View From My Topological Pocket (2023).

We would further point out that the “point of view” being smuggled in to interpret a digital computer’s operations lives in the human who builds, maintains, and utilizes it. If we want a system to create its own point of view, we will need to find a way for it to bind the information in (1) a “projector”/screen, (2) an actual point of view proper, or (3) the backwards lightcone that feeds into such a point of view. As argued, none of these are viable solutions.

Reality’s Deep Causal Structure is Poorly Understood

Finally, another key consideration that has been discussed extensively is that the very building blocks of reality have unclear, opaque causal structures. Arguably, then, if we want to replicate the internal causal structure of a conscious system, the classical input-output mapping is not enough. If you want to ensure that what is happening inside the system has the same causal structure as its simulated counterpart, you would also need to replicate how the system would respond to non-standard inputs, including x-rays, magnetic fields, and specific molecules (e.g. Xenon isotopes).

These ideas have all been discussed at length in articles, podcasts, presentations, and videos. Now let’s move on to a more recent consideration we call “Costs of Embodiment”.

Costs of Embodiment

Classical “computational complexity theory” is often used as a silver-bullet “analytic frame” to discount the computational power of systems. Here is a typical line of argument: under the assumption that consciousness isn’t the result of implementing a quantum algorithm per se, there is “nothing that it can do that you couldn’t do with a simulation of the system”. This, however, neglects the complications that come from instantiating a system in the physical world, with all that it entails. To see why, we must first explain the nature of this analytic style in more depth:

Introduction to Computational Complexity Theory

Computational complexity theory is a branch of computer science that focuses on classifying computational problems according to their inherent difficulty. It primarily deals with the resources required to solve problems, such as time (number of steps) and space (memory usage).

Key concepts in computational complexity theory include:

  1. Big O notation: Used to describe the upper bound of an algorithm’s rate of growth.
  2. Complexity classes: Categories of problems with similar resource requirements (e.g., P, NP, PSPACE).
  3. Time complexity: Measure of how the running time increases with the size of the input.
  4. Space complexity: Measure of how memory usage increases with the size of the input.

In brief, this style of analysis is suited for analyzing the properties of algorithms that are implementation-agnostic, abstract, and interpretable in the form of pseudo-code. Alas, the moment you start to ground these concepts in the real physical constraints to which life is subjected, the relevance and completeness of the analysis starts to fall apart. Why? Because:

  1. Big O notation counts how the number of steps (time complexity) or number of memory slots (space complexity) grows with the size of the input (or in some cases size of the output). But not all steps are created equal:
    1. Flipping the value of a bit might be vastly cheaper in the real world than moving the value of a bit to another location that is physically very far away in the computer.
    2. Likewise, some memory operations are vastly more costly than others: in the real world you need to take into account the cost of redundancy, distributed error correction, and entropic decay of structures not in use at the time.
  2. Not all inputs and outputs are created equal. Taking in some inputs might be vastly more costly than others (e.g. highly energetic vibrations that shake the system apart mean something to a biological organism, which needs to adapt to the stress induced by the nature of the input). Likewise, expressing certain outputs might be much more costly than others, as the organism needs to reconfigure itself to deliver the result of the computation – a cost that isn’t considered by classical computational complexity theory.
  3. Interacting with a biological system is a far more complex activity than interacting with, say, logic gates and digital memory slots. We are talking about a highly dynamic, noisy, soup of molecules with complex emergent effects. Defining an operation in this context, let alone its “cost”, is far from trivial.
  4. Artificial computing architectures are designed, implemented, maintained, reproduced, and interpreted by humans who, we must assume, already have powerful computational capabilities of their own – giving such systems an unfair advantage over biological systems (which require zero human assistance).

Why Embodiment May Lead to Underestimating Costs

Here is a list of considerations that highlight the unique costs that come with real-world embodiment for information-processing systems beyond the realm of mere abstraction:

  1. Physical constraints: Traditional complexity theory often doesn’t account for physical limitations of real-world systems, such as heat dissipation, energy consumption, and quantum effects.
  2. Parallel processing: Biological systems, including brains, operate with massive adaptive parallelism. This is challenging to replicate in classical computing architectures and may require different cost analyses.
  3. Sensory integration: Embodied systems must process and integrate multiple sensory inputs simultaneously, which can be computationally expensive in ways not captured by standard complexity measures.
  4. Real-time requirements: Embodied systems often need to respond in real-time to environmental stimuli, adding temporal constraints that may increase computational costs.
  5. Adaptive learning: The ability to learn and adapt in real-time may incur additional computational costs not typically considered in classical complexity theory.
  6. Robustness to noise: Physical systems must be robust to environmental noise and internal fluctuations, potentially requiring redundancy and error-correction mechanisms that increase computational costs.
  7. Energy efficiency: Biological systems are often highly energy-efficient, which may come at the cost of increased complexity in information processing.
  8. Non-von Neumann architectures: Biological neural networks operate on principles different from classical computers, potentially involving computational paradigms not well-described by traditional complexity theory.
  9. Quantum effects: At the smallest scales, quantum mechanical effects may play a role in information processing, adding another layer of complexity not accounted for in classical theories.
  10. Emergent properties: Complex systems may exhibit physical emergent properties that arise from the interactions of simpler components, as well as from phase transitions, potentially leading to computational costs that are difficult to predict or quantify using standard methods.
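
As a minimal illustration of point 6, here is a sketch under simplified assumptions (a 3x repetition code and an arbitrary 5% bit-flip noise rate) of the overhead that a basic error-correction scheme adds on top of whatever the "abstract" algorithm is doing:

```python
import random

# Minimal sketch of point 6: protecting a bit string against noise with a 3x
# repetition code and majority voting. The noise level is an arbitrary assumption.

def encode(bits):
    """Store every logical bit three times (3x the memory of the abstract system)."""
    return [b for b in bits for _ in range(3)]

def add_noise(stored, flip_prob=0.05):
    """Environmental noise flips each physical bit independently."""
    return [b ^ 1 if random.random() < flip_prob else b for b in stored]

def decode(stored):
    """Majority vote over each triple (extra work per logical bit)."""
    return [1 if sum(stored[i:i + 3]) >= 2 else 0 for i in range(0, len(stored), 3)]

random.seed(0)
message = [random.randint(0, 1) for _ in range(1000)]
recovered = decode(add_noise(encode(message)))
errors = sum(m != r for m, r in zip(message, recovered))
print(f"memory overhead: 3x, residual errors: {errors}/1000")
```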

See appendix for a concrete example of applying these considerations to an abstract and embodied object recognition system (example provided by Kristian Rönn).

Case Studies:

1.  2D Computers

It is well known in classical computing theory that a 2D computer can implement anything that an n-dimensional computer can do: a 2D Turing machine can simulate arbitrary computers of this class, and the simulation preserves computability and, up to polynomial overhead, runtime complexity. In that abstract sense, the 2D computer is "as good as" the original.

However, living in a 2D plane comes with enormous challenges that highlight the cost of embodiment in a given medium. In particular, we will see that the *routing costs* of information grow very quickly: the channels connecting different parts of the computer need to take turns so that crossed wires can transmit information without saturating the medium of (wave/information) propagation.

A concrete example comes from examining what happens when you divide a circle into regions. This is a well-known math problem: derive a general formula for the number of regions a circle is divided into when you connect n points (in general position) on its circumference. The usual takeaway is that even though the number of regions at first looks like powers of 2 (2, 4, 8, 16…), the pattern eventually breaks (the number after 16 is, surprisingly, 31 and not 32).

For the purpose of this example, we will simply focus on how the number of edges grows versus how the number of crossings between edges grows as we increase the number of nodes. Since every pair of nodes is connected by an edge, the number of edges as a function of the number of nodes n is n choose 2. Similarly, any four points define exactly one crossing (where the two diagonals of the quadrilateral they form intersect), so the number of crossings is n choose 4. When n is small (6 or less), the number of crossings is smaller than or equal to the number of edges. But as soon as we hit 7 nodes, the number of crossings dominates. Asymptotically, the number of edges grows as O(n^2) in Big O notation, whereas the number of crossings grows as O(n^4), which is much faster. If this system is used to implement an algorithm that requires every pair of nodes to interact with each other once, we may at first be under the impression that the complexity will grow as O(n^2). But if this system is embodied, messages between the nodes will start to collide with each other at the crossings. Eventually, the delays and traffic jams caused by embodying the system in 2D will dominate its time complexity.
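
The numbers in this example are easy to check directly; a minimal Python sketch:

```python
from math import comb

# Edges vs. crossings for n nodes in general position on a circle,
# with every pair of nodes connected by a straight chord.
for n in range(4, 11):
    edges = comb(n, 2)      # each pair of nodes contributes one edge: O(n^2)
    crossings = comb(n, 4)  # each set of 4 nodes contributes one crossing: O(n^4)
    print(f"n={n:2d}  edges={edges:3d}  crossings={crossings:3d}")

# The output shows the crossover: equal at n=6 (15 vs. 15),
# with crossings dominating from n=7 onward (21 edges vs. 35 crossings).
```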

2. Blind Systems: Bootstrapping a Map Isn’t Easy

A striking challenge that biological systems need to tackle to instantiate moments of experience with useful information arises when we consider the fact that, at conception, biological systems lack a pre-existing “ground truth map” of their own components, i.e. where they are, and where they are supposed to be. In other words, biological systems somehow bootstrap their own internal maps and coordination mechanisms from a seemingly mapless state. This feat is remarkable given the extreme entropy and chaos at the microscopic level of our universe.

Assembly Theory (AT) (2023) provides an interesting perspective on this challenge. AT conceptualizes objects not as simple point particles, but as entities defined by their formation histories. It attempts to elucidate how complex, self-organizing systems can emerge and maintain structure in an entropic universe. However, AT also highlights the intricate causal relationships and historical contingencies underlying such systems, suggesting that the task of self-mapping is far from trivial.

Consider the questions this raises: How does a cell know its location within a larger organism? How do cellular assemblies coordinate their components without a pre-existing map? How are messages created and routed without a predefined addressing system and without colliding with each other? In the context of artificial systems, how could a computer bootstrap its own understanding of its architecture and component locations without human eyes and hands to see and place the components in their right place?

These questions point to the immense challenge faced by any system attempting to develop self-models or internal mappings from scratch. The solutions found in biological systems might potentially rely on complex, evolved mechanisms that are not easily replicated in classical computational architectures. This suggests that creating truly self-understanding artificial systems capable of surviving in a hostile, natural environment, may require radically different approaches than those currently employed in standard computing paradigms.

How Does the QRI Model Overcome the Costs of Embodiment?

This core QRI article presents a perspective on consciousness and the binding problem that aligns well with our discussion of embodiment and computational costs. It proposes that moments of experience correspond to topological pockets in the fields of physics, particularly the electromagnetic field. This view offers several important insights:

  1. Frame-invariance: The topology of vector fields is Lorentz invariant, meaning it doesn’t change under relativistic transformations. This addresses the need for a frame-invariant basis for consciousness, which we identified as a challenge for traditional computational approaches.
  2. Causal significance: Topological features of fields have real, measurable causal effects, as exemplified by phenomena like magnetic reconnection in solar flares. This satisfies the requirement for consciousness to be causally efficacious and not epiphenomenal.
  3. Natural boundaries: Topological pockets provide objective, causally significant boundaries that “carve nature at its joints.” This contrasts with the difficulty of defining clear system boundaries in classical computational models.
  4. Temporal depth: The approach acknowledges that experiences have a temporal dimension, potentially lasting for tens of milliseconds. This aligns with our understanding of neural oscillations and provides a natural way to integrate time into the model of consciousness.
  5. Embodiment costs: The topological approach inherently captures many of the “costs of embodiment” we discussed earlier. The physical constraints, parallel processing, sensory integration, and real-time requirements of embodied systems are naturally represented in the complex topological structures of the brain’s electromagnetic field.

This perspective suggests that the computational costs of consciousness may be even more significant than traditional complexity theory would indicate. It implies that creating artificial consciousness would require not just simulating neural activity, but replicating the precise topological structures of electromagnetic fields in the brain. This is a far more challenging task than conventional AI approaches.

Moreover, this view provides a potential explanation for why embodied systems like biological brains are so effective at producing consciousness. The physical structure of the brain, with its complex networks of neurons and electromagnetic fields, may be ideally suited to creating the topological pockets that correspond to conscious experiences. This suggests that embodiment is not just a constraint on consciousness, but a fundamental enabler of it.

Furthermore, there is a non-trivial connection between topological segmentation and resonant modes. The larger a topological pocket is, the lower the frequency its resonant modes can reach. These modes are, effectively, broadcast to every region within the pocket (much akin to how any spot on the surface of an acoustic guitar expresses the vibrations of the guitar as a whole). Thus, topological segmentation quite conceivably might be implicated in the generation of maps for the organism to self-organize around (cf. bioelectric morphogenesis according to Michael Levin, 2022). Steven Lehar (1999) and Michael E. Johnson (2018) in particular have developed really interesting conceptual frameworks for how harmonic resonance might be implicated in the computational character of our experience. The QRI insight that topology can mediate resonance further complicates the role of phenomenal boundaries in the computational character of consciousness.
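
To see why pocket size bounds the lowest available resonant frequency, consider the textbook idealization of a one-dimensional resonator: the fundamental frequency is f₀ = v / (2L), so a larger pocket supports a lower fundamental. A minimal sketch (the wave speed below is an arbitrary placeholder, not a claim about neural tissue):

```python
# Idealized 1D resonator: fundamental frequency f0 = v / (2 * L).
# Larger "pockets" -> lower achievable resonant frequencies.
# The wave speed is an arbitrary placeholder, not a measured neural value.

def fundamental_frequency(length_m: float, wave_speed_m_s: float = 10.0) -> float:
    return wave_speed_m_s / (2.0 * length_m)

for length in (0.01, 0.05, 0.20):  # pockets of increasing size, in meters
    print(f"L = {length:.2f} m  ->  f0 = {fundamental_frequency(length):6.1f} Hz")
```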

Conclusion and Path Forward

In conclusion, the costs of embodiment present significant challenges to creating digital sentience that traditional computational complexity theory fails to fully capture. The QRI solution to the boundary problem, with its focus on topological pockets in electromagnetic fields, offers a promising framework for understanding consciousness that inherently addresses many of these embodiment costs. Moving forward, research should focus on: (1) developing more precise methods to measure and quantify the costs of embodiment in biological systems, (2) exploring how topological features of electromagnetic fields could be replicated or simulated in artificial systems, and (3) investigating the potential for hybrid systems that leverage the natural advantages of biological embodiment while incorporating artificial components (cf. Xenobots). By pursuing these avenues, we may unlock new pathways towards creating genuine artificial consciousness while deepening our understanding of natural consciousness.

It is worth noting that the QRI mission is to “understand consciousness for the benefit of all sentient beings”. Thus, figuring out the constraints that give rise to computationally non-trivial bound experiences is one key piece of the puzzle: we don’t want to accidentally create systems that are conscious and suffering and become civilizationally load-bearing (e.g. organoids animated by pain or fear).

In other words, understanding how to produce conscious systems is not enough. We need to figure out (a) how to ensure that they are animated by information-sensitive gradients of bliss, and (b) how the computational properties of consciousness can be leveraged to build more benevolent mind architectures, namely, architectures that care about their own wellbeing and the wellbeing of all sentient beings. This is an enormous challenge; clarifying the costs of embodiment is one key step forward, but only one part of an ecosystem of actions and projects needed for consciousness research to have a robustly positive impact on the wellbeing of all sentient beings.

Acknowledgments:

This post was written at the July 2024 Qualia Research Institute Strategy Summit in Sweden. It comes about as a response to incisive questions by Kristian Rönn on QRI’s model of digital sentience. Many thanks to Curran Janssen, Oliver Edholm, David Pearce, Alfredo Parra, Asher Soryl, Rasmus Soldberg, and Erik Karlson, for brainstorming, feedback, suggesting edits, and the facilitation of this retreat.

Appendix

Excerpt from Michael E. Johnson’s Principia Qualia (2015) on Frame Invariance (pg. 61)

What is frame invariance?

A theory is frame-invariant if it doesn’t depend on any specific physical frame of reference, or subjective interpretations to be true. Modern physics is frame-invariant in this way: the Earth’s mass objectively exerts gravitational attraction on us regardless of how we choose to interpret it. Something like economic theory, on the other hand, is not frame-invariant: we must interpret how to apply terms such as “GDP” or “international aid” to reality, and there’s always an element of subjective judgement in this interpretation, upon which observers can disagree.

Why is frame invariance important in theories of mind?

Because consciousness seems frame-invariant. Your being conscious doesn’t depend on my beliefs about consciousness, physical frame of reference, or interpretation of the situation – if you are conscious, you are conscious regardless of these things. If I do something that hurts you, it hurts you regardless of my belief of whether I’m causing pain. Likewise, an octopus either is highly conscious, or isn’t, regardless of my beliefs about it.[a] This implies that any ontology that has a chance of accurately describing consciousness must be frame-invariant, similar to how the formalisms of modern physics are frame-invariant.

In contrast, the way we map computations to physical systems seems inherently frame-dependent. To take a rather extreme example, if I shake a bag of popcorn, perhaps the motion of the popcorn’s molecules could – under a certain interpretation – be mapped to computations which parallel those of a whole-brain emulation that’s feeling pain. So am I computing anything by shaking that bag of popcorn? Who knows. Am I creating pain by shaking that bag of popcorn? Doubtful… but since there seems to be an unavoidable element of subjective judgment as to what constitutes information, and what constitutes computation, in actual physical systems, it doesn’t seem like computationalism can rule out this possibility. Given this, computationalism is frame-dependent in the sense that there doesn’t seem to be any objective fact of the matter derivable for what any given system is computing, even in principle.

[a] However, we should be a little bit careful with the notion of ‘objective existence’ here if we wish to broaden our statement to include quantum-scale phenomena where choice of observer matters.

Example of Cost of Embodiment by Kristian Rönn

Abstract Scenario (Computational Complexity):

Consider a digital computer system tasked with object recognition in a static environment. The algorithm processes an image to identify objects, classifies them, and outputs the results.

Key Points:

  • The computational complexity is defined by the algorithm’s time and space complexity (e.g., O(n^2) for time, O(n) for space).
  • Inputs (image data) and outputs (object labels) are well-defined and static.
  • The system operates in a controlled environment with no physical constraints like heat dissipation or energy consumption.

However, this abstract analysis is extremely optimistic, since it doesn’t take the cost of embodiment into account (a toy cost model after the embodied scenario below makes the contrast concrete).

Embodied Scenario (Embodied Complexity):

Now, consider a robotic system equipped with a camera, tasked with real-time object recognition and interaction in a dynamic environment.

Key Points and Costs:

  1. Real-Time Processing:
    • The robot must process images in real-time, requiring rapid data acquisition and processing, which creates practical constraints.
    • Delays in computation can lead to physical consequences, such as collisions or missed interactions.
  2. Energy Consumption:
    • The robot’s computational tasks consume power, affecting the overall energy budget.
    • Energy management becomes crucial, balancing between processing power and battery life.
  3. Heat Dissipation:
    • High computational loads generate heat, necessitating cooling mechanisms that consume additional energy and add further cost and waste to the embodied system.
    • Overheating can degrade performance and damage components, requiring thermal management strategies.
  4. Physical Constraints and Mobility:
    • The robot must move and navigate through physical space, encountering obstacles and varying terrains.
    • Computational tasks must be synchronized with motion planning and control systems, adding complexity.
  5. Sensory Integration:
    • The robot integrates data from multiple sensors (camera, lidar, ultrasonic sensors) to understand its environment.
    • Processing multi-modal sensory data in real-time increases computational load and complexity.
  6. Error Correction and Redundancy:
    • Physical systems are prone to noise and errors. The robot needs mechanisms for error detection and correction.
    • Redundant systems and fault-tolerance measures add to the computational overhead.
  7. Adaptation and Learning:
    • The robot must adapt to new environments and learn from interactions, requiring active inference (i.e. we can’t train a new model every time the ontology of an agent needs updating).
    • Continuous learning in an embodied system is resource-intensive compared to offline training in a digital system.
  8. Physical Wear and Maintenance:
    • Physical components wear out over time, requiring maintenance and replacement.
    • Downtime for repairs affects the overall system performance and availability.
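
To make the contrast between the two scenarios concrete, here is a toy sketch (all coefficients are made up, not a claim about any particular robot) that wraps the same abstract operation count in an embodied cost model charging for energy, cooling overhead, and missed real-time deadlines:

```python
# Toy sketch contrasting the abstract and embodied scenarios above.
# All coefficients are made-up assumptions, purely illustrative.

def abstract_ops(n_pixels: int) -> int:
    """Abstract scenario: O(n^2) operations over the image, and that's the whole story."""
    return n_pixels ** 2

def embodied_cost(n_pixels: int,
                  joules_per_op: float = 1e-9,
                  cooling_overhead: float = 0.3,     # extra energy spent on cooling
                  deadline_s: float = 0.05,          # real-time budget per frame
                  ops_per_second: float = 1e9,
                  missed_deadline_penalty: float = 10.0) -> dict:
    """Embodied scenario: the same operations, plus energy, cooling, and a
    penalty whenever processing overruns the real-time deadline."""
    ops = abstract_ops(n_pixels)
    seconds = ops / ops_per_second
    energy = ops * joules_per_op * (1.0 + cooling_overhead)
    penalty = missed_deadline_penalty if seconds > deadline_s else 0.0
    return {"ops": ops, "seconds": seconds, "joules": energy, "penalty": penalty}

print(embodied_cost(n_pixels=5_000))    # within the real-time and energy budget
print(embodied_cost(n_pixels=12_000))   # same Big O, but blows the deadline and energy budget
```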

An Energy Complexity Model for Algorithms

Roy, S., Rudra, A., & Verma, A. (2013). https://doi.org/10.1145/2422436.2422470

Abstract

Energy consumption has emerged as a first class computing resource for both server systems and personal computing devices. The growing importance of energy has led to rethink in hardware design, hypervisors, operating systems and compilers. Algorithm design is still relatively untouched by the importance of energy and algorithmic complexity models do not capture the energy consumed by an algorithm. In this paper, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of ‘parallel’ I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better models the ground truth and present some experimental justification for our model. We believe that the simplicity (and applicability) of this energy model is the main contribution of the paper. We present some sufficient conditions on algorithm behavior that allows us to bound the energy complexity of the algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy optimal algorithms for sorting (and its special cases like permutation), matrix transpose and (sparse) matrix vector multiplication.
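
Schematically, the energy model described in the abstract is a weighted sum (the notation below is mine, not the paper’s; c₁ and c₂ are machine-dependent weights):

```latex
E(A) \;\approx\; c_1 \cdot T(A) \;+\; c_2 \cdot P(A)
```

where T(A) is the time complexity of algorithm A in the RAM model and P(A) is the number of “parallel” I/O accesses it makes.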

Thermodynamic Computing

Conte, T. et al. (2019). https://arxiv.org/abs/1911.01968

Abstract

The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the current standards of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations are clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems – this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties – device scaling, software complexity, adaptability, energy consumption, and fabrication economics – indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature’s innate computational capacity. We call this type of computing “Thermodynamic Computing” or TC.

Energy Complexity of Computation

Say, A.C.C. (2023). https://doi.org/10.1007/978-3-031-38100-3_1

Abstract

Computational complexity theory is the study of the fundamental resource requirements associated with the solutions of different problems. Time, space (memory) and randomness (number of coin tosses) are some of the resource types that have been examined both independently, and in terms of tradeoffs between each other, in this context. Since it is well known that each bit of information “forgotten” by a device is linked to an unavoidable increase in entropy and an associated energy cost, one can also view energy as a computational resource. Constant-memory machines that are only allowed to access their input strings in a single left-to-right pass provide a good framework for the study of energy complexity. There exists a natural hierarchy of regular languages based on energy complexity, with the class of reversible languages forming the lowest level. When the machines are allowed to make errors with small nonzero probability, some problems can be solved with lower energy cost. Tradeoffs between energy and other complexity measures can be studied in the framework of Turing machines or two-way finite automata, which can be rewritten to work reversibly if one increases their space and time usage.

Relevant physical limitations

  • Landauer’s limit: The lower theoretical limit on the energy consumed by computation: at least k_B·T·ln(2) per irreversible bit erasure (see the short calculation after this list).
  • Bremermann’s limit: A limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe.
  • Bekenstein bound: An upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy.
  • Margolus–Levitin theorem: A bound on the maximum computational speed per unit of energy.
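
For a sense of scale, Landauer’s limit can be evaluated directly; a short calculation (the bit count in the second example is arbitrary):

```python
import math

# Landauer's limit: the minimum energy dissipated per irreversible bit erasure
# at temperature T is k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

energy_per_bit = K_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {energy_per_bit:.3e} J per bit")   # ~2.87e-21 J

# Arbitrary example: erasing 10^15 bits at the theoretical minimum.
print(f"Erasing 1e15 bits: {energy_per_bit * 1e15:.3e} J")              # ~2.87e-6 J
```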

References

Bateson, G. (1972). Steps to an ecology of mind. Chandler Publishing Company.

Chalmers, D. J. (1995). Absent qualia, fading qualia, dancing qualia. In T. Metzinger (Ed.), Conscious Experience. Imprint Academic. https://www.consc.net/papers/qualia.html

Gómez-Emilsson, A. (2023). The view from my topological pocket. Qualia Computing. https://qualiacomputing.com/2023/10/26/the-view-from-my-topological-pocket-an-introduction-to-field-topology-for-solving-the-boundary-problem/

Gómez-Emilsson, A., & Percy, C. (2023). Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness. Frontiers in Human Neuroscience, 17. https://www.frontiersin.org/articles/10.3389/fnhum.2023.1233119

Johnson, M. E. (2015). Principia qualia. Open Theory. https://opentheory.net/PrincipiaQualia.pdf

Johnson, M. E. (2018). A future of neuroscience. Open Theory. https://opentheory.net/2018/08/a-future-for-neuroscience/

Kingdom, F.A.A., & Prins, N. (2016). Psychophysics: A practical introduction. Elsevier.

Lehar, S. (1999). Harmonic resonance theory: An alternative to the “neuron doctrine” paradigm of neurocomputation to address gestalt properties of perception. http://slehar.com/wwwRel/webstuff/hr1/hr1.html

Levin, M. (2022). Bioelectric morphogenesis, cellular motivations, and false binaries with Michael Levin. DemystifySci Podcast. https://demystifysci.com/blog/2022/10/25/kl2d17sphsiw2trldsvkjvr91odjxv

Pearce, D. (2014). Social media unsorted postings. HEDWEB. https://www.hedweb.com/social-media/pre2014.html

Sharma, A. (2023). Assembly theory explains and quantifies selection and evolution. Nature, 622, 321–328. https://www.nature.com/articles/s41586-023-06600-9

Aligning DMT Entities: Shards, Shoggoths, and Waluigis

We have recently seen some incredible “rogue AI behavior” in Microsoft’s Bing.

While reading some of these outputs I was reminded of… rogue DMT entities. Indeed, sometimes people have DMT experiences and encounter beautiful angelic beings that want to help and heal you (and sometimes do so!), but other times people encounter demonic beings that want to harm and hurt you (and sometimes do so!).

Just as it is unwise to roll out a technology like Bing that is full of potentially misaligned subagents, I also reckon that it’s unwise to deliver DMT therapy to the masses *before* fixing this bug. While I think that responsible consenting adults *should* be allowed to experiment with DMT as they wish, the bar for “safety and effectiveness” should be much higher when we think of it as a possible mental health intervention.

Ok, so both Bing and DMT experiences can create insane rogue subagents. How are these two things more than merely superficially connected?

Someone I talked to recently was actually worried that DMT entities are perhaps controlling these AI technologies to infiltrate our world. I don’t think that’s a very likely explanation. Rather, I think there is a much more parsimonious explanation for this similarity: namely, that both involve a predictive system spinning up misaligned agents in order to fit narratives that appear in the training data. Let’s dig in!

When in Rome

Last year, when playing with GPT and after talking to Connor from Conjecture, who introduced me to Janus’ Simulator Theory[1] (see also: Janus’ Simulators by Scott Alexander), it became clear that there is a similarity between DMT entities and the quasi-agentic simulated characters that GPT-like systems spin up in order to predict the next token in text. If this is true, then it suggests that there might be interesting transpositions between the strategies and concerns discussed in the AI Alignment world and the findings from psychedelic phenomenology about how to have a good time with the beings that you encounter in far-out places. Let me explain.

A very large fraction of our nervous system is dedicated to minimizing surprise (cf. free energy principle, predictive processing). Now, I don’t think that this is all that the nervous system is doing, nor do I think it is a theory of consciousness. But it is a very important piece of the puzzle nonetheless.

QRI has championed a set of integrative models that tie together the free energy principle within the larger context of consciousness research in order to explain psychedelic phenomenology. Most recently we have been discussing the frame of “Psychedelic Thermodynamics”, which brings together Neural Annealing, Non-Linear Wave Computing, Johnson’s Symmetry Theory of Valence, and Topological Approaches to Binding.

The bit that is relevant from Psychedelic Thermodynamics here is that there is a process by which psychedelics intensify the background noise that, together with sensory stimuli, stimulates internal representations (via a process of stochastic resonance). Importantly, internal representations function as energy sinks from the point of view of the background noise, whereas they are energy sources from the point of view of other representations.

The two key features that work as energy sinks of this background energy are symmetry and “recognition”. This was first discussed in The Hyperbolic Geometry of DMT Experiences, but it also shows up elsewhere[2]. In particular, when you can interpret an ambiguous input as “expected given the context”, that sucks energy out of the background noise in order to energize a gestalt that binds together low-level features into a coherent high-level percept (e.g. Necker Cube). When this “clicks”, it will radiate out its excess energy to the rest of the field, and also *constrain* the shape of the field such that it functions as new context that changes the probability for other ambiguous sensations to collapse into representations consistent with the new gestalt. In other words, on DMT you can go from what feels like “pure undifferentiated non-dual consciousness” to “this specific carnival with harlequins doing acrobatics” by collapsing how you interpret slight imperfections in the field, which then snowball into instantiating an entire realm of experience where each shape resonates with every other shape (a “vibe lock”, as we call it).

Now, once you interpret a sufficient number of features as high-level gestalts, then they will start interacting with each other and further constraining the possible interpretations of the rest of the field. This, I believe, is somewhat similar to GPT, except on a full spatiotemporal context rather than a sequence of tokens context (cf. probabilistic graphical models).

If this model is correct, as soon as you start collapsing the energized field into interpretations, then a particular narrative structure may start dominating and “making sense” of what is happening. This can indeed snowball into getting into tricky and sometimes really unpleasant situations.

Slides from: Healing Trauma With Neural Annealing

Meet the Meeseeks

In parallel, it’s important to briefly mention the role that subagents typically have in us: namely, what Romeo Stevens calls the “Mr. Meeseeks interpretation of subagents”. Subagents are created to achieve a goal; they don’t really like existing, but they will hang in there until they’re convinced the goal has been met. They are spun up to accomplish goals that would normally require you to spend a lot of attention and that cannot simply be offloaded to muscle memory (the way, say, driving a car can be). A typical example is the response one may have to living in an environment with very negative people (say, dark triad personalities), where you need to spin up subagents that behave like them so that you can predict their next move. In cases of PTSD, it may be that part of the problem is that one created a lot of rather negative subagents (of people, situations, dynamics, actual physical hazards, etc.) and that, as a collective, they reinforce each other.

Hi I’m Mr. Meeseeks! I see your grandmother is emotionally abusive. I’ll pretend to be her inside your mind so that you can predict what she will do next and thus avoid getting harmed. Let me know when she’s gone so I can go *PUFF*.

The Return to Goodness

Here loving-kindness meditation can be enormously helpful. I refer you to Anders & Maggie’s meditative exercise to heal negative internal subagents (see Letter XI: Douglas Adams). Essentially what you do is visualize a container of very positive benevolent and high-valence feelings (call it unconditional love, God, primordial goodness, Buddhamind, etc. – or whatever really resonates in your inner world simulation). You then tell the story that subagents come out of that container and once they achieve their goals they go back to it in order to “merge with love” once again. You can even explain this to the subagents, and they can feel the sense of relief that comes when they finally achieve union with this primordial love. Gently guide them towards it. And if you do this over and over, you will in fact be cleaning up a lot of subagents lingering implicit in the field, until you achieve a smooth field with high-valence and a non-dual feel.

Ok, so taking stock: our field of experience can “collapse” into familiar representations when they start predicting each other, sub-agents cease to exist once they have achieved their goal, and loving-kindness exercises can help you steer lost and lingering subagents towards their re-unification with primordial love (or, again, whatever resonates with you!). Moreover, these subagents are embedded in the predictive processing hierarchy and will try to do exactly what you predict they are most likely to do. So given these conditions, how do you align DMT entities?

Aligning DMT Entities

Here are some suggestions:

* First, the simplest and most straightforward intervention is to simply get good and prosocial training data. This is highlighted by the Waluigi Effect, in which Bing sort of turns nasty *because* character trait inversion is a *trope* in human stories, and there are plenty of such stories online. This could in principle be fixed by having an AI that classifies tropes and narrative structures and filters texts that contain any hint of Waluigi tropes or character trait switching narrative structures before feeding them as training data to GPT. Similarly, in the case of DMT entities, you can go to an environment with vetted inputs that are always really wholesome. Recall: the influence that the last couple of weeks have on what comes up in a psychedelic experience is vastly larger than what you experienced a year or a decade ago. The recent inputs matter a lot, so don’t worry about the fact that you’ve seen horror movies in the past. If you’ve been consuming really wholesome media for the last three months, that will matter enormously more.

* Second, add really highly-weighted good training data that makes it so that aligned outcomes are always the most likely. In our case, this would be indeed things like exercising the “gently guide subagents to the pool of love” move so that it’s a very likely outcome and they predict that that’s what’s going to happen. Train on visualizing the Buddha with a hand up saying “don’t fear”. Internalize that “love is always stronger than fear” (which is something I actually believe in, based on many incredible experiences). And so on.

Don’t Fear

* Third, use good vibes as the base. Essentially, negative entities feed off of negatively valenced patterns. Literally, felt somatic sensations of pinching, pressure, twisting, etc. can become the building blocks of gestalts that end up becoming negative entities. Starting out with a very positive and smooth field reduces the fuel that negative entities have to construct themselves in resonance with patterns of dissonance. We’ve heard about good outcomes from the Wim Hof method and from chanting metta meditation before trips (YMMV!).

* Fourth, More Dakka on equanimity. Remember the teachings of Rob Burbea (“what you resist persists”) and Shinzen Young (“suffering equals pain times resistance”). Essentially, resisting negative energies makes them stronger. This is doubly so in psychedelic states of consciousness. Instead, remember that high enough equanimity, where you don’t let positive or negative vibes “move you”, maximizes the rate of stress dissipation within your nervous system, and this accelerates the rate at which negative vibes flow through you and exit your system via some kind of radiative cooling process currently not understood by science. Practice taking cold showers without stalling or flinching, or eating relatively hot peppers without resisting or letting the pain get to you. At least for DMT realms up to the Magic Eye level, the physical discomfort of the state is not stronger than a cold shower… that is, if you don’t resist it! If you do resist it, the discomfort can be drastically amplified, and you can turn a few waves in a glass of water into a storm.

* Fifth, going back to the Waluigi Effect: the article explains why Reinforcement Learning from Human Feedback (RLHF) doesn’t really work for it (it encourages Waluigis to hide and pretend, rather than really getting rid of them). So instead of simply “rewarding good behavior”, I suggest you reward “clean subagentic structures”. There is a “vibe” to the “intentions” of subagents, and you will soon realize that Waluigis have an “ambiguous intention” vibe. Use metta to reward sub-agents that have collapsed, clean intentions instead. Importantly, this takes priority over rewarding subagents that are really good at, for example, flattering you: you’ve been fed enough narratives in which flattery turns to betrayal that flattery is no guarantee of alignment.

* Sixth, I think the principles of Shard Theory might be really useful here. In particular, notice that not only can you reward sub-agents with your attention and your top-down vibes, but once they are sufficiently “alive” they can actually start to *reward each other*. This, I believe, is how you get things like “egregore possessions” and other uncanny related phenomena. More on this below. You want to have a clean and smooth field of awareness so that subagent conspiracies can be easily spotted and addressed before they snowball.

Example Entities

Finally, let me ground this with some of the common categories of DMT entities:

Shoggoths: These are entities that seem to emerge out of the resonance of interpersonal representations of preferences at the cultural level. The things that you can “recognize the field as” in this case are “people doing what they want”, where what they want may be different from what you want. If you have an adversarial relationship with a particular culture or subculture and you resist these wants, they will get reinforced by your disliking them, and in some cases they can start to locally bind with each other until you get what some psychonauts call “an amalgam” of cultural preferences. This is also what I think people are talking about when they say they have met an “egregore” of a culture or ideology on DMT. These are hard or perhaps impossible to align: cultures are in fact self-contradictory. So the amalgam will typically hold a lot of internal contradictions, which it will then externalize. The way to deal with a Shoggoth involves re-annealing, in addition to the suggestions above. DMT Shoggoths are sort of a symptom of failed clean annealing, in that they “coagulate” rather than “click”, and are amalgams of lots of incompatible preferences loosely held by a political coalition. This could perhaps be predicted from first principles with non-linear wave computing and Shard Theory, so the fact that it does happen to people makes it a salient case for this field of study.

Demons: these are sub-agents that come up in “hell realms” which are states of consciousness where you believe that you are a bad person and deserve some kind of punishment. The demons here are just, in my opinion, doing exactly what you expect them to do, namely, punishing you. I think that in addition to equanimity and metta, these entities also respond to boosting narratives of redemption that are wholesome in nature. For example, there is this spiritual belief that demons ultimately are all on the path towards God… they are just in a more extreme version of the Parable of the Prodigal Son, and they might take thousands of years to redeem themselves. But they will do so. In this case, you sending them metta and telling them that they are actually intrinsically good, can slowly, but surely, help them unwind their dissonant configurations.

Harlequins: these are entities from what feels like some kind of “clown dimension”. They are extremely common on DMT. Because we have so many tropes of negative clowns, this can often turn ugly. I suggest you reinforce the narrative of “harlequins as tricksters who are child-like in their curiosity about consciousness”. In fact, prompted properly and softened with enough metta, harlequins can be extremely helpful for consciousness research. You can play positive sum games with them in which you give them a really good time, and in exchange they help you explore the most surprising features of the space you are inhabiting. They can become “consciousness research assistants” with a flair for the weird and wondrous.

There is of course a zoo of possible entities, and in fact many possible entities currently exist merely in potential. As we imagine new healthy and wholesome tropes in our sense-making attempts for DMT realms, I predict that we will “unlock” new and more helpful DMT beings. In particular, I think that Team Consciousness tropes can give us a really good aesthetic to use as the primary energy sink for “recognizing” entities in this space. If you ever meet Rainbow God, say hi for me. It *always* gives you a mind-blowing revelation about reality and consciousness that enriches your life for the better 😉

How does this help AI Alignment?

I will conclude by saying that studying DMT entities might actually be a way to make headway in AI alignment in two ways. First, because they genuinely can be really smart entities you can interact with, on a bounded timeframe, and who seem to share a lot of features with AI technologies. They are human-level or higher in their intelligence (because they have access to new geometries of phenomenal space and hence to novel qualia computing, and because they lack the ego defenses that make you incapable of having certain thoughts!). And second, because all of the above may actually also transpose to discussions in AI alignment. In particular, I think the above suggestions are helpful for researchers. AI alignment can expose you to a lot of mental health risks (from the belief that “we’re doomed”, to creating strong tulpas that don’t align with your own values!). The recommendations I provide above may transpose to that domain: realize that even AI alignment research makes you spin up subagents inside you! The tools I shared may be helpful to increase the mental health of anyone studying this field who is now suffering from an infestation of negative subagents. Bring them back to Love!

See also:


[1] Not to be confused with simulationism (the belief that we actually live in a computer simulation) or indirect realism about perception (the philosophical realization that all we ever have access to are the features of an internal world simulation and we don’t perceive the world “directly”)

[2] Lehar’s Harmonic Gestalt argues that this emerges naturally out of the hill-climbing towards higher harmony between internal representations. Also discussed in Healing Trauma with Neural Annealing.

Symmetry in Qualia – an Interview with Andres Gomez-Emilsson by Justin Riddle

I recently had the pleasure of talking to Justin Riddle*, who is one of the few people in academia who take quantum theories of consciousness seriously while also doing formal neuroscience research (see his publications, which include work on transcranial alternating current stimulation (tACS) for a number of conditions, EEG analysis for decision making, reward, and cognition, as well as concept work on the connection between fractals and consciousness).

I first met him at Toward a Science of Consciousness in Tucson in 2016 (see my writeup about that event, which I attended with David Pearce). About a year ago I noticed that he started uploading videos about quantum theories of consciousness, which I happily watched while going on long walks. Just a few months ago, we both participated in a documentary about consciousness (more on that later!) and had the chance to sit down and record a video together. He edited our long and wide-ranging discussion into a friendly and consumable format by adding explainers and visual aids along the way. I particularly appreciate his description of “mathematical fictionalism” at 21:30 (cf. Mathematics as the Study of Patterns of Qualia).

We hope you enjoy it!

* Thanks to David Field for catalyzing this meeting 🙂


Video Description:

In episode 32 of the quantum consciousness series, Justin Riddle interviews Andres Gomez-Emilsson, the director of research at the Qualia Research Institute. Andres is passionate about understanding qualia, which is the feeling and quality of subjective experience. In this interview, we discuss many of Andres’ theories: mathematical fictionalism, models of valence, neural annealing as it pertains to psychedelic therapy, and antitolerance medications to reduce suffering.

First up, we discuss the nature of qualia and whether or not there can be a universal mathematical description of subjective experience. Andres posits that the experience of having a thought should not be confused with the thought itself. Therefore, any attempt at mathematical description will be wrapped up within the experience of the person suggesting the mathematics. As he states, mathematics is as real as the Lord of the Rings, a great story that we can tell, but not to be confused with reality itself. Next up, we discuss the symmetry theory of valence [proposed by Michael Johnson in Principia Qualia] which postulates that the structure of experience determines how good or bad an experience feels (such as the imagination of certain geometric patterns imbuing a sense of well-being whereas other patterns being anxiogenic). The geometric patterns that lead to positive valence (positive emotional experiences) are those shapes recognized as “sacred geometry”. However, Andres cautions that because these “sacred” geometric shapes generate well-being, people have used this reproducible experience to peddle New Age metaphysics. We should be cautious of the ability to generate positive experience as it can be used to manipulate people into buying into particular belief systems. Third, we discuss recent findings that single dose psilocybin in a therapeutic context can produce a lasting reduction in symptoms of depression. Andres posits that this could be explained as a form of neural annealing (see also, and also). The mind “heats up” and breaks through discordant neural pathways and through neural plasticity during the psychedelic experience will allow for the formation of new neural pathways with higher resonant properties consistent with positive valence. This contributes to Andres’ overall ontological model of reality in which the universe is a unified field of experience that is pinched off into individuals. Here, he starts with an unbroken unity of all things that is topologically segmented into individuals. Finally, Andres is a devout hedonist with the long-term goal of reducing suffering. His group at the Qualia Research Institute is investigating medications that reduce adaptation to molecules over long-term use. Go check out Andres’ YouTube channel and the Qualia Research Institute website!

~~~ Timestamps ~~~

0:00 Introduction to the Qualia Research Institute

21:28 Mathematical fictionalism and qualia

28:58 Symmetry Theory of Valence

35:23 Using subjective experience for scientific discovery

41:10 Consciousness as topological segmentation

45:19 Topographic bifurcations within the mind-field

51:07 Neural annealing in psychedelic therapy

1:02:09 Electrical oscillations in neural annealing

1:06:23 Hyperbolic geometry in the brain

1:12:16 Definition of hyperbolic geometry

1:16:23 Antitolerance medication to reduce suffering

1:23:59 Quantum computers and qualia

Website: http://www.justinriddlepodcast.com

David Pearce on the Long-Term Future of Consciousness: The Meta-Copernican Revolution

Excerpt from David Pearce‘s 2008 Diary Update (images made w/ DALL-E, except for the pictures of Shulgin):


New discoveries? Nothing dramatic. I dutifully flip through Nature each week; wade through turgid tomes of analytic philosophy; and scan Medline abstracts. A lot of the time my heart isn’t in it. Compared to an item from Dr Shulgin‘s library, the illumination can seem trivial. I very much doubt if people who have tried major psychedelics are any smarter on average than the drug-naïve; in fact psychonauts may be cognitively overwhelmed or (rarely) even brain-damaged by their experiences. To complicate comparisons further, many altered states are dross – just like innumerable textures of everyday life. But by opening up a Pandora’s box of new phenomena, psychedelics do confer an immensely richer evidential base for any theory of mind and the world – an evidential base too rich, indeed, for our existing primitive terms, language and conceptual equipment to handle. One compares the laments of physicists starved of new empirical data to test their theories beyond the low-energy Standard Model with the fate of the psychedelic investigator. For in contrast the aspiring psychonaut may be forced to abandon the empirical method, not because he exhausts the range of novel phenomenology it delivers, but because the Darwinian mind can neither cope with LSD / ketamine / salvia / DMT‘s (etc) weirdness, nor weave the novel modes of sentience disclosed into an integrated world-picture.

Alexander Shulgin in his lab. #1

Of course, claims of epochal significance cut no ice with the drug-naïve. Those innocent of drug-induced exotica see no more need to enhance their evidential base than did the cardinals (apocryphally) invited to look through Galileo‘s telescope. An a priori refusal to acknowledge the potential significance of alien modes of sentience is impossible to overcome in subjects whose experience of altered states is confined to getting drunk. Over time, even my own knowledge of these bizarre realms is fading. My ancestral namesake was briefly awoken from his dogmatic slumbers; but DP version-2008 has rejoined the ranks of the living dead in the ghetto of consensus reality.

Alexander Shulgin in his lab. #2

My assimilation isn’t yet complete. Even as a born-again sleepwalker, I sometimes wonder if there may be a first-person method alternative to drug-based investigations that can unlock novel phenomenology latent within excitable nervous tissue. There is a crying need for alternative avenues, I think, since drug-driven self-assays are for the most part not merely unlawful and taboo, but arguably can’t be practised responsibly until the substrates of well-being are guaranteed in a (hypothetical) post-Darwinian era of genetically pre-programmed bliss. I’ve thought about alternatives to using psychoactive drugs, not least because of the shallowness of my own current research compared to the richness of the empirical methodology pioneered by Dr Shulgin. In order to discover both the formal, mathematico-physical and the intrinsic, subjective properties of the world, a dual methodology of third- and first-person research is indispensable. The former can be abdicated to the physical sciences; but not the latter. Natural science offers no explanation of why we’re not zombies, an unfortunate anomaly if consciousness is fundamental both to our understanding of the world and the world itself. By forswearing the empirical method, we effectively guarantee that the mysteries of consciousness will never be solved. Whereas insentience is, so to speak, all of a piece – hypothetical “zombies” in the philosophical sense of the term are all exactly alike in being non-conscious – there are innumerable ways to be sentient: qualia are fantastically diverse in ways we’ve scarcely begun to map out. So I reckon the only way adequately to understand Reality will be both to capture its formal structure – ideally the master equation of the TOE of the Multiverse – and literally to incorporate ever more of the stuff of the world into one’s expanding psyche to explore the state-space of its textures – the “what-it’s-likeness”. Only incorporation and systematic molecular permutation can disclose the subjective features of all permutations of matter and energy: the solutions, I conjecture, to the equations of the TOE. A priori, one could never have guessed that cells of the striate cortex mediate visual experience and cells in the posterior parietal cortex mediate auditory experience, quite irrespective of their typical functional role in the sensory systems of naturally evolved organisms. We know about such phenomena – and full-blown phenomenal sunsets and symphonies – only because we instantiate the neuronal cell-assembles that embody such qualia. Thus to discover novel categories of experience, I think we should construct and personally instantiate genetically enhanced designer brain cells, systematically altering their intracellular amino acid sequences and gene expression profiles to design/discover new categories of experience as different as is sight from sound, making them part of one’s own psyche/virtual world. Or if this incorporation sounds too irreversible, perhaps we might splice in designer genes and allelic combinations for new modes of experience into subsets of our existing nerve cells, systematically coding new protein sequences into discrete areas of the brain and then selectively expressing the designer proteins they code for at will. Eventually, however, systematic manipulation of the molecular ingredients of one’s neural porridge/mind-dust can be harnessed to mind-expansion in the literal sense. 
This is because we need bigger mind/brains, not just to mirror external reality more effectively, but also to discover more of its subjective properties. Such discoveries can only be accomplished empirically.

New neuron types for new neurotypes.

I suppose what drives me here is reflection on just how (superficially) trivial are the neurochemical differences between nerve cells mediating, say, phenomenal colour and phenomenal sound – and indeed reflection on how (superficially) trivial are the molecular differences in the cells mediating the phenomenology of desire, volition and belief-episodes. How can such tiny molecular differences exert such dramatic subjective effects? LSD, for instance, is undetectable in the body three hours after consumption; and yet a few hundred micrograms of the serotonin 5-HT2A partial agonist can transport the subject into outlandish alternative virtual worlds for 10 hours or more. How many analogous, radically incommensurable kingdoms of experience, mediated by equally “trivial” molecular variations, await discovery? How will the uncharted state-spaces be systematically explored? What will be the nature of life/civilisation when these kingdoms of experience are spliced together in composite minds; recruited to play an information-bearing role; harnessed to new art forms and new lifestyles; and ultimately integrated into communities of composite minds in advanced civilisations? For sure, talk of discovering a “new category of experience” doesn’t sound a particularly exciting kind of knowledge when couched in the abstract, any more than discovery of a new brand of perfume. OK, it’s a new experience; but so what? [Andrés adds: so what!?] One might sacrifice a lot for the opportunity to experience a novel phenomenal colour; but what cognitive value should be ascribed to an unknown category of experience for which one hasn’t even a name? Initially at any rate, the novel modes of experience that we discover within a modified neural proteome won’t be harnessed to senses, either internal or external, let alone harnessed to whole conceptual schemes, cultures and novel languages of thought. So they won’t play any functional role in the mind/brain: they won’t be information-bearing. But then neither are visual or auditory experiences per se; they have no intrinsic connection to sensory perception. Dreams, for instance, can be vibrantly colourful; they don’t reliably track anything in the external world. Honed by natural selection after recruitment by awake living organisms to track mind-independent patterns, visual and auditory experience has taken millions of years to play out; and who knows where it will end. By the same token, the developmental potential of new modes of experience that we discover in tweaked neurons is equally unfathomable from here.

Every scent, every color, every touch sensation, every sound, every novel qualia…

I can understand the impatience of an exasperated sceptic. What interest have novel “tickles” of experience beyond the psychopathology of the subject? Analogously, conventional wisdom in an echolocation (etc)-based civilisation might scornfully ask a similar question if and when post-chiropteran psychonauts first access drug-induced speckles of colour or jarring shrieks or whistles of sound – or perhaps when investigators recklessly explore a new methodology of mind-expansion by incorporating alien nervous tissue into their psyche. The chiropteran consensus wisdom might account the new phenomena weird but trivial – and inexpressible in language to boot. So why should any sane chiropteran mind run the risk of messing itself up just to explore such psychotic states? For our part, human ignorance of what it’s like to be a bat isn’t too unsettling because we know that bats don’t have a rich conceptual scheme, culture or technology. We are “superior” to bats; and therefore their alien modes of experience aren’t especially important. We don’t even give our ignorance much thought.

What is it like to be a bat? An empirical neural tissue insertion protocol to explore nature’s very own echolocation qualia from the comfort of your own home…

But latent in matter and energy – and flourishing in other branches of the universal wavefunction – are presumably superintellects and supercivilisations in other Everett branches whose conceptual schemes are rooted in modes of experience no less real than our own. I suspect that accessing the subjective lifeworlds of hitherto alien mind/brains will inaugurate a meta-Copernican Revolution to dwarf anything that’s come before. The textures of such alien minds are as much a natural property of matter and energy as the atomic mass of gold; and no less important to understanding the nature of the world. Needless to say, grandiose claims of new paradigms, meta-Copernican revolutions, etc., should usually be taken with a healthy grain of salt. I am loath to write such expressions, not least because I can imagine both the withering scorn of my hyper-rational but drug-naïve teenage namesake, and likewise the dismissive reaction of my drug-naïve contemporaries today. Such are the perils of a priori philosophizing practised by academic philosophers (and soi-disant scientists) unwilling to get their hands (or their minds) dirty with the empirical method. In each case, our ignorance of the intrinsic, subjective nature of configurations of most of the stuff of the world is fundamental. It’s an ignorance not remediable by simple application of the hypothetico-deductive method, falsificationism, Bayesianism, or the usual methodologies of third-person science. If you want to find out what it’s like to be a bat, then you have to experience the phenomenology of echolocation. Knowledge-acquisition entails a hardware upgrade. A notional IQ of 200 won’t help without the neural wetware to go with it – any more than a congenitally deaf supergenius can hear music by virtuoso feats of reasoning alone.

But latent in matter and energy – and flourishing in other branches of the universal wavefunction – are presumably superintellects and supercivilisations in other Everett branches whose conceptual schemes are rooted in modes of experience no less real than our own.

I guess one deterrent to investigation of altered and exotic states is the thought that the novel phenomena disclosed “aren’t Real” – as though the reality of any phenomenon depended on it being a copy or representation of something else external to itself. I wonder if I lived in a world of Mary-like superscientists – smart monochromats who see the world in black and white – whether I would dare put on “psychedelic” spectacles and hallucinate phenomenal colour? And could I communicate to my Mary-like superscientist colleagues the significance of what they were missing without sounding like a drug-deranged crank? Probably not.

Literally Expanding Our Mind To Overcome Our Fundamental Ignorance of Alien Modes of Experience

So I reckon that we should, literally, expand our minds. If we do, how far should incorporation go? The size of the human brain is limited by the human birth-canal, a constraint that technologies of extra-uterine pregnancy from conception to term will presumably shortly overcome. Over time, brains can become superbrains; and sentience can become supersentience. Ultimately, should we aspire to become God or merely gods? My (tentative) inclination is that we should all become One [Andrés adds: see David’s Quora response on the topic of Open Individualism]; and not merely out of deference to my New Age friends. Separateness from each other is an epistemic, not just an ethical, limitation: a source of profound ignorance. For we fundamentally misconstrue the nature of other sentient beings, misunderstanding each other as objects to which we fitfully attribute feelings rather than as pure subjects. [Actually, the story is more complicated. If inferential realism about perception is true, then the sceptic about Other Minds is right, in a sense: the phenomenal people encountered in one’s egocentric world-simulation are zombies. But when one is awake, the zombies serve as avatars that causally covary with sentient beings in one’s local environment. So the point stands.] Yes, literally fusing with other minds/virtual worlds sounds an unattractive (as well as infeasible) prospect for the foreseeable future; and not just because of their lousy organic avatars. For we certainly wouldn’t want to Become One with a bunch of ugly Darwinian minds; and likewise, they might get a nasty shock if they tasted one’s own. Infatuated lovers may want to fuse; rival alpha males certainly don’t [unless one eats a defeated opponent, a form of intimacy practised in some traditional cultures; but this is a very one-sided consummation of a relationship]. However, perhaps the prospect of unification will be more exciting if and when we become posthuman smart angels, so to speak: beautiful in every sense. I have no hidden agenda beyond my abolitionist propagandizing; but on current evidence it’s likely we belong to a family of Everett branches that will lead to god-like beings. And thence to God? I’m sceptical, but I don’t know.

Mindmelding with other Darwinian creatures is kind of a bummer sometimes.

Divinity takes many forms. What kind of (demi)gods might we become? Superhappy beings, I reckon, yes, but superhappiness in what guise? A unitary Über-Mind, or fragmented minds as now? At one extreme of the continuum, posthumans may opt to live solipsistically in designer paradises: an era not just of personalized medicine but personalized VR. [Would I opt to dwell with a harem of several thousand houris and become Emperor Dave the First, Lord of The Universe? And supremely modest too. Yes, probably. I’m a Darwinian male.] Occupying the middle of the continuum is the superconnectivity of web-enabled minds (via neural implants, etc) without unitary experience or loss of personal identity. Such a scenario is a recognizable descendant of the status quo whereby we are all connected via the Net to everyone else. This sort of future is the most “obvious” since it’s an extrapolation of current trends. Extreme interconnectivity is still consistent with extensive ignorance of each other, although expansion and/or functional amplification of our mirror neurons could magnify our capacity for mutual empathetic understanding. Finally, at the other extreme of the continuum, there is presumably a more-or-less complete fusion of posthuman mind/brains into a unitary collective: a blissful analogue of the Borg, but contiguous rather than scattered: there is no evidence spatio-temporally disconnected beings have token-identical experiences. It’s hard enough to solve the binding problem in one mind/brain, let alone across discrete skulls.

Emperor Dave the First, Psychonaut Lord of The Universe, Bliss For All Creatures Under the Sun

I don’t know which if any of these three families of scenario is the most likely culmination of life in the Multiverse. Indeed it’s unclear whether the third scenario, i.e. a unitary experiential Supermind, is even technically feasible. For there is an upper limit to the size and duration of the conjectural “warm” quantum coherence needed for unitary sentience; it’s difficult enough to avoid ultra-rapid thermally-induced decoherence in even a single human mind/brain, let alone a hypothetical global super-mind/brain. Is there a way round this constraint? In spite of the well-worn dictum “black holes have no hair“, I used to play around with the idea that blissful superminds lived on the ultra-cool “surface” of supermassive black holes. All the information content of their interior and information content at the horizon is smeared out across the entire horizon, allowing unitary megaminds of maximum information density – and maximum intelligent bliss: what Seth Baum aptly calls “utilitronium”. This conjecture needs more work. But whether conscious mind is unitary or discrete, I suspect that posthuman modes of existence will be based, not on today’s ordinary waking consciousness, but on unimaginably different modes of sentience. In addition, I predict that these modes of sentience will be as different in intensity from ours as is a supernova from a glowworm. Thus any speculative story we may now be tempted to tell about what life may be like millions or billions of years hence will of necessity ignore a fundamental difference between future minds and us. Human futurology omits the key evolutionary transitions ahead in the nature of consciousness – not only the ethically all-important hedonic transition to superhappiness that I stress, but other modes of sentience currently unknown. The discontinuity promised by any future technological Singularity – or soft Singularities – derives not merely from an exponential growth of computer processing power, but from inconceivably different textures of sentience. Actually, I entertain many bizarre ideas. The art is taking them seriously enough to explore their implications and testable predictions, but sceptically enough not to be seduced into believing they are likely to be true. And what about the nearest I come to a dogmatic commitment? Could the abolitionist project turn out to be mistaken too? I guess so. Yet at least the abolition of suffering is not a phenomenon we will live to regret.

Three families of scenarios for the culmination of life in the Multiverse: #1 everyone kinda doing their own thing in their little virtual worlds. #2 hybrid hive minds of hypersocial connected individuals who choose to retain their (porous) individuality. #3 God, a single mega-mind that binds as much matter and energy as possible into unitary superexperiences.


See Also:

QRI Meetup in London on October 8th 2022

I’m currently in the UK. London, more precisely. I was invited to participate in this year’s instance of the Tyringham Initiative (my review) and, naturally, I couldn’t miss it. I’m _very_ happy I went. I will share more about it and other recent DMT insights soon. But in the meantime, I just want to announce that there will be a QRI meetup on October 8th (2022) in Arch1 (West Ham Arches, Cranberry Ln, London E16 4BJ).

2022 Tyringham Initiative attendees

QRI Meetup Schedule

  • 2PM: Space Opens.
  • 3PM: Snacks*.
  • 4PM: Experience Sharing Activity (bring an interesting experience to share with others!).
  • 6PM: Andrés (me) unveils Hedonium Shockwave (1) and delivers a speech**.
  • 7PM: Audience participation – there will be an Open Mic for people to introduce themselves, share their thoughts about QRI, and (optionally) make the case for a given Cause X (5 minutes per person)***.
  • 8PM: Food*.
  • 8PM–9:30PM: Andrés available for short 1-1s. Please feel free to share your candid feedback. I’ll be all ears! (There will be a signup list).
  • 10:30PM: Wrap-up.

What to bring?

You don’t need to bring anything. Your presence is more than enough. That said, please feel free to bring with you an experience to share (think “Qualia of the Day“). This can range from perfumes, to spices, to books, to boardgames, to stim toys, to puzzles, to jokes, to nootropics, to pieces of art.

What to wear?

Please come in an attire that brings you joy. Bring at least one item (even if just a detail, like a pin or a scarf) that symbolizes the victory of consciousness over pure replicators. Be creative and open minded.

Do you have suggestions for how to accelerate the progress of QRI, help eliminate intense suffering, map the DMT realms, and achieve super-human bliss for all? I’m all ears!


* Bring vegetarian snacks, drinks, and food to share with others, if you are so inclined. Please do not bring alcoholic drinks as the space has a full bar and they don’t allow outside drinks into the venue, which extends to the garden area.

** Please do what you can to be there before 5:50PM if you intend to see the speech so that your arrival doesn’t interrupt or distract anyone. If you arrive between 6PM and 7PM, please make a quiet entrance.

*** The winner will get a prize.

Just Look At The Thing! – How The Science of Consciousness Informs Ethics

It is very easy to answer many of these fundamental biological questions; you just look at the thing! 


From Richard Feynman’s talk There’s Plenty of Room at the Bottom (1959)

Introduction

The quote above comes from a lecture Richard Feynman gave in which he talks about the challenges and opportunities of studying and interacting with the world at a very small scale. Among other things, he touches upon how gaining access to e.g. a good-enough electron microscope would allow us to answer long-standing questions in biology by just looking at the thing (cf. Seeing Cell Division Like Never Before). Once you start to directly engage with the phenomenon at a high-enough resolution, tackling these questions at the theoretical level would turn out, in retrospect, to be idle armchair speculation.

I think that we can make the case that philosophy of ethics at the moment might be doing something like this. In other words, it speculates about the nature of value at a theoretical level without engaging with the phenomenon of value at a high resolution. Utilitarianism (whether classical or negative), at least as it is usually formulated, may turn out to have background assumptions about the nature of consciousness, personal identity, and valence that a close examination would show to be false (or at least very incomplete). Many criticisms of wireheading, for instance, seem to conflate pleasure and reward (more on this soon), and yet we now know that these are quite different. Likewise, the repugnant conclusion or the choice between total vs. mean utilitarianism are usually discussed using implicit background assumptions about the nature of valence and personal identity. This must stop. We have to look at the thing!

Without further ado, here are some of the key ways in which an enriched understanding of consciousness can inform our ethical theories:

Mixed Valence

One ubiquitous phenomenon that I find is largely neglected in discussions about utilitarianism is that of mixed valence states. Not only is it the case that there are many flavors of pleasure and pain, but it is also the case that most states of consciousness blend both pleasurable and painful sensations in complex ways.

In Principia Qualia (Michael Johnson) the valence triangle was introduced. This describes the valence of a state of consciousness in terms of its loadings on the three dimensions of negative, positive, and neutral valence. This idea was extended in Quantifying Bliss, which further enriched it by adding a spectral component to each of these dimensions. Let’s work with this valence triangle to reason about mixed valence.

In order to illustrate the relevance of mixed valence states, we can look at how they influence policies within the context of negative utilitarianism. Let us say that we agree that there is a ground truth to the total amount of pain and pleasure a system produces. A naïve conception of negative utilitarianism could then be “we should minimize pain”. But pain that exists within an experience that also contains pleasure may matter a lot less than pain that exists in an experience with no pleasure to “balance it out”!

The naïve conception would thus not be able to distinguish between the following two scenarios. In Scenario A we have two persons, one suffering from both an intense headache and an intense stomach ache and the other enjoying both a very pleasant sensation in the head and a very pleasant sensation in the stomach. In Scenario B, we switch it up: one person experiences an intense headache while also enjoying a very pleasant sensation in the stomach, and the other way around for the other person.

But if you have ever experienced a very pleasant sensation arise in the midst of an otherwise unpleasant experience you will know how much of a difference it makes. Such a pleasant sensation does not need to directly blunt the painful sensation; the mere presence of enough pleasure makes the overall nature of the experience far more tolerable. How and why this happens is still, of course, a mystery (in a future post we shall share our speculations) but it seems to be an empirical fact. This can have extraordinary implications, where for example a sufficiently advanced meditator might be able to dilute very painful sensations with enough equanimity (itself a high-valence state) or by e.g. generating jhanic sensations (see below). Have you ever seen this discussed in an academic journal on ethics? I didn’t think so.

We don’t need to invoke such fancy scenarios to see the reality and importance of mixed valence states. The canonical example that I use to illustrate this phenomenon is this: you just broke up with someone (-), are at a concert enjoying really good music (+), are coming up on weed and alcohol (+), but also need to pee really badly (-). We’ve all been there, haven’t we? If you get sufficiently absorbed into the cathartic pleasure of the music and the drugs, the negative feelings temporarily recede into the background and thus might tilt the experience towards the net positive for a while.

Once you consider the reality of mixed valence states, there is a veritable Cambrian Explosion of possible variants of utilitarianism. For example, if you do accept that pleasure can somehow dilute pain within a given moment of experience, then you could posit that there is a “line of hedonic zero” on the valence triangle and anything on one side of it is net positive:

A version of negative utilitarianism we could call within-subject-aggregated-valence negative utilitarianism recognizes any experience in the “Net Positive” region to be perfectly acceptable even though it contains painful sensations.

Alternatively, another version we may call strict negative valence utilitarianism might say that pain, whether or not it is found within an experience with a lot of pleasure, is still nonetheless unacceptable. Here, however, we may still have a lot of room for a civilization animated by information-sensitive gradients of bliss: we can use the gradients that have a mixture of positive and neutral Vedanā for information signaling:

Yet another view, perhaps called within-subject-majoritarian negative valence utilitarianism might say that what makes an experience worth-living and unproblematic is for it to be at least 50% pleasant, regardless of the composition of the other 50%:

Now, I am not going to adjudicate between these views today. All I am pointing out for the time being is that actually engaging with the phenomenon at hand (i.e. how valence manifests in reality) radically enriches our conceptions, and allows us to notice that most of ethics has an impoverished understanding of the phenomenon it comments on. We can change that.
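To make the contrast between these variants concrete, here is a toy sketch (illustrative Python, not QRI code) that classifies a single experience’s valence-triangle loadings under the three formulations above. The loadings and the 50% threshold are made-up numbers for illustration only.

```python
# Toy sketch: classify an experience's valence-triangle loadings under the
# three negative-utilitarian variants named above. Loadings and the 50%
# threshold are illustrative assumptions, not measurements.

def classify(positive: float, negative: float, neutral: float) -> dict:
    total = positive + negative + neutral
    p, n = positive / total, negative / total  # normalize to the valence triangle

    return {
        # Within-subject-aggregated-valence NU: acceptable if pleasure outweighs
        # pain within the same moment of experience ("line of hedonic zero").
        "aggregated": p >= n,
        # Strict negative valence utilitarianism: any pain at all is unacceptable,
        # no matter how much pleasure co-occurs with it.
        "strict": n == 0.0,
        # Within-subject-majoritarian NU: acceptable if at least 50% of the
        # experience is pleasant, regardless of how the rest is composed.
        "majoritarian": p >= 0.5,
    }

# Made-up loadings for the concert-while-heartbroken example from earlier.
print(classify(positive=0.55, negative=0.25, neutral=0.20))
# -> {'aggregated': True, 'strict': False, 'majoritarian': True}
```

Same experience, three different verdicts: that is the whole point of taking mixed valence seriously.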

Logarithmic Scales

As argued in Logarithmic Scales of Pleasure and Pain (summary) we think that there is a wide range of evidence that suggests that the intensity of both pleasure and pain follows a long-tail distribution. I am not going to repeat the arguments here, since I’ve written and presented about them extensively already. I will merely mention that I am deeply suspicious of the intellectual seriousness of any ethicist who somehow fails to notice the enormous moral significance of the following states of consciousness, among others:

On the positive side:

  • Temporal lobe epilepsy
  • MDMA
  • Jhanas
  • Good high-dose 5-MeO-DMT trip

On the negative side:

  • Cluster Headaches
  • Kidney Stones
  • Bad high-dose 5-MeO-DMT trip
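To see why this matters for prioritization, here is a purely illustrative sketch of what a long-tail reading of pain ratings does to aggregate comparisons. The mapping and its parameters are assumptions I made up for the example, not the empirical distribution argued for in the article.

```python
# Purely illustrative: why long-tail intensities break linear averaging.
# The "10x more intense every 2 rating points" growth rate is an assumption
# for illustration only.

def intensity(rating: float, points_per_decade: float = 2.0) -> float:
    """Map a 0-10 self-report to a hypothetical long-tail intensity."""
    return 10 ** (rating / points_per_decade)

print(intensity(10) / intensity(3))   # a 10/10 is ~3162x a 3/10 under these toy parameters

# Linear reading: ten 3/10 pains roughly match three 10/10 pains (30 vs 30).
# Long-tail reading: the three 10/10 pains dominate by roughly three orders of magnitude.
print((3 * intensity(10)) / (10 * intensity(3)))
```

Under a linear reading the two bundles look comparable; under a long-tail reading the extreme states dominate the moral calculus almost entirely.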

Valence and Self-Models

One of the claims of QRI is that every experience, no matter how outlandish and unlike our normal everyday human experience, has valence characteristics. An analogy can be made with the notion of physical temperature: every physical object has a temperature, no matter what it is made out of or what its shape is.

Most human experiences have a lot of shared structure, with things like a central “phenomenal self” that works as an organizing principle for arranging sensations. Many meditators and psychedelic enthusiasts point out that suffering seems to have something to do with our sense of self: that feelings matter only to the extent that they are happening to someone. But experiences without a phenomenal self (or with radically altered phenomenal selves) will nonetheless still have valence characteristics. Ego deaths can be dysphoric or euphoric.

We argue that what matters is actually the overall structure of the experience (cf. valence structuralism). It just so happens that above a certain level of valence, the phenomenal self starts to become an impediment to further bliss. Ultra-pleasant experiences, thus, tend to be selfless! But this does not make them worthless. On the contrary, their intrinsic worth, coming from their positive valence, can go through the roof.

That said, reporting the valence of very exotic experiences can be remarkably difficult. This doesn’t mean that we should give up; rather, we ought to develop new methods, vocabulary, and culture to be able to place these experiences on the same moral footing as our normal everyday life.

For example, the so-called “toroidal state” (encountered on DMT or during a meditative cessation) can have profound valence effects, to the point of making you reconsider the very nature and scope of what matters.

From The Three Doors chapter in Mastering the Core Teachings of the Buddha (Daniel Ingram):

Regardless of the way a specific door manifests, it reveals something completely extraordinary about the relationship between “the watcher” and “the watched” that it would take a very warped, non-Euclidean view of the universe to explain, though I will try shortly. One way or another, these fleeting experiences cannot easily be explained in terms of our normal, four-dimensional experience of space-time, or within our ordinary subject/object experience. […] When the no-self door predominates with suffering as its second aspect, then a very strange thing happens. There may be an image on one side staring back, but even if there isn’t, the universe becomes a toroid (doughnut-shaped), or occasionally a sphere, and the image and this side of the toroid switch places as the toroid universe spins. It may spin sideways (horizontally), or it may spin vertically (like head over heels), and may also feel like a hood of darkness suddenly being pulled over our heads as the whole thing synchronizes and disappears, or like everything twisting out of existence. The rarest no-self/suffering variant is hard to describe, and involves reality becoming like a doughnut whose whole outer edge rotates inwards such as to trade places with its inner edge (the edge that made the hole in the middle) that rotates to the outer edge position, and when they trade places reality vanishes. The spinning includes the whole background of space in all directions. Fruition occurs when the two have switched places and the whole thing vanishes.

I recommend reading the whole chapter for what I consider to be some ultra-trippy phenomenology of surprising ethical relevance (see also: No-Self vs. True Self).

In summary: this all indicates that states of consciousness have valence characteristics independently of the presence, absence, shape, or dynamic of a phenomenal self within them. If your ethicist isn’t considering the moral worth of Nirvana… perhaps consider switching to one who does.

Valence and Personal Identity

The solution to the phenomenal binding problem has implications for both personal identity and ethics. If, as I posit, each moment of experience is in fact a topological pocket in the fields of physics, then Closed Individualism would seem to be ruled out. Meaning, the standard conception of identity where you start existing when you are born and stop existing when you die would turn out to be a strange evolutionarily adaptive fiction. What really exists is a gigantic field of consciousness subdivided into countless topological pockets. Empty Individualism (“you are just a moment of experience”) and Open Individualism (“we are all the same universal consciousness”) would both be consistent with the facts, and it might be impossible to decide between them. Yet, I argue that the vast majority of ethical theories have as an implicit background assumption Closed Individualism. So realizing that it is false has major implications.

In particular, if we take the Empty Individualist perspective, it might be easier to defend negative utilitarianism: since each snapshot of experience is a completely separate being, you simply cannot “make it up” to someone who is currently suffering by giving him/her enough happiness in the future. Simply put, that suffering will never be redeemed.

Alternatively, if we take the Open Individualist perspective, we now might have actual grounds to decide between, say, average vs. total utilitarianism. Ultimately, you will be forced to experience everyone and everything. This line of reasoning becomes particularly interesting if you also take seriously something like Feynman and Wheeler’s One-electron Universe. Here we might possibly even objectively determine the moral worth of an experience in terms of “how long the one electron stays trapped inside it”. An experience with a huge spatial breadth and one with enormous temporal depth may be equivalent according to this metric: they’re just structured differently (cf. Pseudo-Time Arrow). In this account, you are bouncing backwards and forwards in time interfering with yourself forever. The multiverse is the structure emergent from this pattern of self-interference, and it is eternal and immutable in a certain sense. Relative to a small experience, a large experience would be one that keeps the one electron trapped for longer. Thus, there would be a strong case to care more about bigger and brighter experiences: you’ll be there for ages!

If indeed you are bouncing backwards and forwards forever in this structure, then perhaps average utilitarianism can be defended. In brief, since you are always somewhere, what matters is not how large the structure is, but the shape of its distribution of states.

Valence Structuralism

Finally, if you pay attention to the nature of highly valenced states of consciousness you will notice that they have structural features. The Symmetry Theory of Valence (overview; CDNS) can be experientially verified for oneself by introspecting on the structural features of one’s experience when enjoying intense bliss or enduring intense suffering. Rob Burbea’s meditation instructions are well worth reading to get a sense of what I’m talking about. This would seem to matter a lot when it comes to e.g. deciding what kind of artificial sentient minds we might want to create. Much more on this in the future.


Putting It All Together

High-dose DMT experiences are an excellent example of the sort of state of consciousness that is part of reality, is generally not taken seriously in philosophy (despite its enormous significance), and has many elements that challenge preconceptions about pleasure and pain and inform our understanding of valence.

For a theory of physics to be true it needs to be able to explain physical phenomena outside of room temperature. Likewise, for an ethical theory to be in any way true, it ought to be able to account for states of consciousness outside of the range of normal human everyday life experience. DMT states, among others, are examples of non-room-temperature states of consciousness that you can use to test if your theory of ethics actually generalizes. How do you make sense of experiences that have more qualia, have mixed valence, have exotic phenomenal selves, and have valence effects up there in the logarithmic scale? That’s what we need to answer if we are serious about ethics.

The future holds much crazier trade-offs than that between Human Flourishing vs Potatoes with Muzak. Already today, I would argue, the facts suggest that we ought to begin recognizing the reality of Hell and the ethical imperative to destroy it. And beyond, our theory of ethics ought to be powerful enough to contend with the outlandish realities of consciousness we are soon bound to encounter.


See also:

Digital Computers Will Remain Unconscious Until They Recruit Physical Fields for Holistic Computing Using Well-Defined Topological Boundaries

[Epistemic Status: written off the top of my head, thought about it for over a decade]

What do we desire for a theory of consciousness?

We want it to explain why and how the structure of our experience is computationally relevant. Why would nature bother to wire, not only information per se, but our experiences in richly structured ways that seem to track task-relevant computation (though at times in elusive ways)?

I think we can derive an explanation here. It is both very theoretically satisfying and literally mind-bending. This allows us to rule out vast classes of computing systems as having no more than computationally trivial conscious experiences.

TL;DR: We have richly textured bound experiences precisely because the boundaries that individuate us also allow us to act as individuals in many ways. This individual behavior can reflect features of the state of the entire organism in energy-efficient ways. Evolution can recruit this individual, yet holistic, behavior due to its computational advantages. We think that the boundary might be the result of topological segmentation in physical fields.


Marr’s Levels of Analysis and the Being/Form Boundary

One lens we can use to analyze the possibility of sentience in systems is this conceptual boundary between “being” and “form”. Here “being” refers to the interiority of things: their intrinsic likeness. “Form” on the other hand refers to how they appear from the outside. Where you place the being/form boundary influences how you make sense of the world around you. One factor that seems to be at play in where you place the being/form boundary is your implicit background assumptions about consciousness. In particular, how you think of consciousness in relation to Marr’s levels of analysis:

  • If you locate consciousness at the computational (or behavioral) level, then the being/form boundary might be computation/behavior. In other words, sentience simply is the performance of certain functions in certain contexts.
  • If you locate it at the algorithmic level, then the being/form boundary might become algorithm/computation. Meaning that what matters for the inside is the algorithm, whereas the outside (the form) is the function the algorithm produces.
  • And if you locate it at the implementation level, you will find that you identify being with specific physical situations (such as phases of matter and energy) and form as the algorithms that they can instantiate. In turn, the being/form boundary looks like crystals & bubbles & knots of matter and energy vs. how they can be used from the outside to perform functions for each other.

How you approach the question of whether a given chatbot is sentient will drastically depend on where you place the being/form boundary.


Many arguments against the sentience of particular computer systems are based on algorithmic inadequacy. This, for example, takes the form of choosing a current computational theory of mind (e.g. global workspace theory) and checking if the algorithm at play has the bare bones you’d expect a mind to have. This is a meaningful kind of analysis. And if you locate the being/form boundary at the algorithmic level then this is the only kind of analysis that seems to make sense.

What stops people from making successful arguments at the implementation level of analysis is confusion about what the function of consciousness is. Without that, deciding which physical systems are or aren’t conscious seems inevitably to become an epiphenomenalist construct: drawing boundaries around systems with specific functions is an inherently fuzzy activity, and any criteria we choose for whether a system is performing a certain function will be at best a matter of degree (and opinion).

The way of thinking about phenomenal boundaries I’m presenting in this post will escape this trap.

But before we get there, it’s important to point out the usefulness of reasoning about the algorithmic layer:

Algorithmic Structuring as a Constraint

I think that most people who believe that digital sentience is possible will concede that at least in some situations The Chinese Room is not conscious. The extreme example is when the content of the Chinese Room turns out to be literally a lookup table. Here a simple algorithmic concern is sufficient to rule out its sentience: a lookup table does not have an inner state! And what it does, from the point of view of its inner workings, is the same no matter how you relabel which input goes with which output. Whatever is inscribed in the lookup table (with however many replies and responses as part of the next query) is not something that the lookup table structurally has access to! The lookup table is, in an algorithmic sense, blind to what it is and what it does*. It has no mirror into itself.

Algorithmic considerations are important. To not be a lookup table, we must have at least some internal representations. We must consider constraints on “meaningful experience”, such as probably having at least some of, or something analogous to: a decent number of working memory slots (and types), a good size of visual field, resolution of color in terms of Just Noticeable Differences, and so on. If your algorithm doesn’t even try to “render” its knowledge in some information-rich format, then it may lack the internal representations needed to really “understand”. Put another way: imagine that your experience is like a Holodeck. Ask the question of what is the lower bound on the computational throughput of each sensory modality and their interrelationships. Then see if the algorithm you think can “understand” has internal representations of that kind at all.
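As a minimal illustration of the structural point (and nothing more), contrast a lookup-table responder with a system that maintains even a trivial internal state. This is hypothetical toy code, not a claim about how any real chatbot is implemented.

```python
# Toy contrast for the algorithmic point above: a lookup table has no inner
# state it can consult, while even a tiny stateful system does. Purely
# illustrative; not a model of any real chatbot.

LOOKUP = {"hi": "hello", "how are you?": "fine"}

def lookup_table_reply(prompt: str) -> str:
    # Nothing here depends on anything but the literal key; relabel the
    # input/output pairs and "what it does internally" is unchanged.
    return LOOKUP.get(prompt, "...")

class StatefulReplier:
    def __init__(self):
        self.history = []            # a minimal internal representation

    def reply(self, prompt: str) -> str:
        self.history.append(prompt)  # the inner state is updated...
        # ...and the output is conditioned on it, however crudely.
        return f"message #{len(self.history)} noted"
```

A real candidate for sentience would of course need internal representations vastly richer than a list of strings; the contrast only shows what “having an inner state at all” means structurally.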

Steel-manning algorithmic concerns involves taking a hard look at the number of degrees of freedom of our inner world-simulation (in e.g. free-wheeling hallucinations) and making sure that there are implicit or explicit internal representations with roughly similar computational horsepower as those sensory channels.

I think that this is actually an easy constraint to meet relative to the challenge of actually creating sentient machines. But it’s a bare minimum. You can’t let yourself be fooled by a lookup table.

In practice, AI researchers will just care about metrics like accuracy, meaning that they will use algorithmic systems with complex internal representations like ours only if it computationally pays off to do so! (Hanson in Age of EM makes the bet that it is worth simulating a whole high-performing human’s experience; Scott points out we’d all be on super-amphetamines). Me? I’m extremely skeptical that our current mindstates are algorithmically (or even thermodynamically!) optimal for maximally efficient work. But even if normal human consciousness or anything remotely like it was such a global optimum that any other big computational task routes around to it as an instrumental goal, I still think we would need to check if the algorithm does in fact create adequate internal representations before we assign sentience to it.

Thankfully I don’t think we need to go there. I think that the most crucial consideration is that we can rule out a huge class of computing systems ever being conscious by identifying implementation-level constraints for bound experiences. Forget about the algorithmic level altogether for a moment. If your computing system cannot build a bound experience from the bottom up in such a way that it has meaningful holistic behavior, then no matter what you program into it, you will only have “mind dust” at best.

What We Want: Meaningful Boundaries

In order to solve the boundary problem we want to find “natural” boundaries in the world to scaffold off of. We take on the starting assumption that the universe is a gigantic “field of consciousness”, so the question of how atoms come together to form experiences becomes the question of how this field becomes individuated into experiences like ours. So we need to find out how boundaries arise in this field. But these are not just any boundaries: they are boundaries that are objective, frame-invariant, causally-significant, and computationally-useful. That is, boundaries you can do things with. Boundaries that explain why we are individuals and why creating individual bound experiences was evolutionarily adaptive; not only why it is merely possible but also advantageous.

My claim is that boundaries with such properties are possible, and indeed might explain a wide range of puzzles in psychology and neuroscience. The full conceptually satisfying explanation results from considering two interrelated claims and understanding what they entail together. The two interrelated claims are:

(1) Topological boundaries are frame-invariant and objective features of physics

(2) Such boundaries are causally significant and offer potential computational benefits

I think that these two claims combined have the potential to explain the phenomenal binding/boundary problem (of course assuming you are on board with the universe being a field of consciousness). They also explain why evolution was even capable of recruiting bound experiences for anything. Namely, that the same mechanism that logically entails individuation (topological boundaries) also has mathematical features useful for computation (examples given below). Our individual perspectives on the cosmos are the result of such individuality being a wrinkle in consciousness (so to speak) having non-trivial computational power.

In technical terms, I argue that a satisfactory solution to the boundary problem (1) avoids strong emergence, (2) sidesteps the hard problem of consciousness, (3) prevents the complication of epiphenomenalism, and (4) is compatible with the modern scientific world picture.

And the technical reason why topological segmentation provides the solution is that with it: (1) no strong emergence is required because behavioral holism is only weakly emergent on the laws of physics, (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. In this post you’ll get a general walkthrough of the solution. The fully rigorous, step by step, line of argumentation will be presented elsewhere. Please see the video for the detailed breakdown of alternative solutions to the binding/boundary problem and why they don’t work.

Holistic (Field) Computing

A very important move that we can make in order to explore this space is to ask ourselves if the way we think about a concept is overly restrictive. In the case of computation, I would claim that the concept is either applied extremely vaguely or that making it rigorous makes its application so narrow that it loses relevance. In the former case we have the tendency for people to equate consciousness with computation at a very abstract level (such as “resource gathering” and “making predictions” and “learning from mistakes”). In the latter we have cases where computation is defined in terms of computable functions. The conceptual mistake to avoid is to think that just because you can compute a function with a Turing machine, you are therefore creating the same inner (bound or not) physical states along the way. And while yes, it would be possible to approximate the field behavior we will discuss below with a Turing machine, it would be computationally inefficient (as it would need to simulate a massively parallel system) and lack the bound inner states (and their computational speedups) needed for sentience.

The (conceptual engineering) move I’m suggesting we make is to first of all enrich our conception of computation. To notice that we’ve lived with an impoverished notion all along.

I suggest that our conception of computation needs to be broad enough to include bound states as possible meaningful inputs, internal steps and representations, and outputs. This enriched conception of computation would be capable of making sense of computing systems that work with very unusual inputs and outputs. For instance, it has no problem thinking of a computer that takes as input chaotic superfluid helium and returns soap bubble clusters as outputs. The reason to use such an exotic medium is not to add extra steps, but in fact to remove extra steps by letting physics do the hard work for you.

(source)

To illustrate just one example of what you can do with this enriched paradigm of computing I am trying to present to you, let’s now consider the hidden computational power of soap films. Say that you want to connect three poles with a wire. And you want to minimize how much wire you use. One option is to use trigonometry and linear algebra, another one is to use numerical simulations. But an elegant alternative is to create a model of the poles between two parallel planes and then submerge the structure in soapy water.

Letting the natural energy-minimizing property of soap bubbles find the shortest connection between three poles is an interesting way of performing a computation. It is uniquely adapted to the problem without needing tweaks or adjustments – the self-organizing principle will work the same (within reason) wherever you place the poles. You are deriving computational power from physics in a very customized way that nonetheless requires no tuning or external memory. And it’s all done simply by each point of the surface wanting to minimize its tension. Any non-minimal configuration will have potential energy, which then gets transformed into kinetic energy and makes it wobble, and as it wobbles it radiates out its excess energy until it reaches a configuration where it doesn’t wobble anymore. So you have to make the solution of your problem precisely a non-wobbly state!
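For readers who like to see the principle in the driver’s seat, here is a minimal numerical stand-in for the soap film: gradient descent on total wire length relaxes a free junction point toward the length-minimizing configuration for three poles. The pole positions, step size, and iteration count are arbitrary illustrative choices.

```python
# Numerical stand-in for the soap film: minimize total wire length from a free
# junction point to three fixed poles by following the "tension" (the gradient).
# Pole positions, step size, and iteration count are illustrative choices.
import numpy as np

poles = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
x = poles.mean(axis=0)                   # start the junction at the centroid

for _ in range(2000):
    diffs = x - poles                    # vectors from each pole to the junction
    dists = np.linalg.norm(diffs, axis=1, keepdims=True)
    grad = (diffs / dists).sum(axis=0)   # gradient of the total wire length
    x -= 0.01 * grad                     # relax toward the minimal ("non-wobbly") state

print(x, np.linalg.norm(x - poles, axis=1).sum())
```

Note that the “program” here is just the energy function; the relaxation dynamics do the rest, exactly as the film does once it stops wobbling.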

In this way of thinking about computation, an intrinsic part of the question about what kind of thing a computation is will depend on what physical processes were utilized to implement it. In essence, we can (and I think should) enrich our very conception of computation to include what kind of internal bound states the system is utilizing, and the extent to which the holistic physical effects of such inner states are computationally trivial or significant.

We can call this paradigm of computing “Holistic Computing”.

From Soap Bubbles to ISING-Solvers Meeting Schedulers Implemented with Lasers

Let’s make a huge jump from soap water-based computation. A much more general case, which is nonetheless in the same family as using soap bubbles for compute, is having a way to efficiently solve the ISING problem. In particular, having an analog physics-based annealing method in this case comes with unique computational benefits: it turns out that non-linear optics can do this very efficiently. You are in a certain way using the universe’s very frustration with the problem (don’t worry I don’t think it suffers) to get it solved. Here is an amazing recent example: Ising Machines: Non-Von Neumann Computing with Nonlinear Optics – Alireza Marandi – 6/7/2019 (presented at Caltech).

The person who introduces Marandi in the video above is Kwabena Boahen, with whom I had the honor to take his course at Stanford (and play with the neurogrid!). Back in 2012 something like the neurogrid seemed like the obvious path to AGI. Today, ironically, people imagine scaling transformers is all you need. Tomorrow, we’ll recognize the importance of holistic field behavior and the boundary problem.

One way to get there on the computer science front will be by first demonstrating a niche set of applications where e.g. non-linear optics ISING solvers vastly outperform GPUs for energy minimization tasks in random graphs. But as the unique computational benefits become better understood, we will sooner or later switch from thinking about how to solve our particular problem, to thinking about how we can cast our particular problem as an ISING/energy minima problem so that physics solves the problem for us. It’s like having a powerful computer but it only speaks a very specific alien language. If you can translate your problem into its own terms, it’ll solve it at lightning speed. If you can’t, it will be completely useless.
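As a pedestrian software caricature of what such a machine does physically and in parallel, here is a tiny simulated-annealing sketch on a random Ising instance. The couplings, problem size, and cooling schedule are arbitrary illustrative choices, not a model of the optical hardware.

```python
# Software caricature of an Ising solver: simulated annealing on
# H(s) = -0.5 * sum_ij J_ij s_i s_j with s_i in {-1, +1}. A nonlinear-optics
# Ising machine attacks the same energy landscape physically and in parallel.
import numpy as np

rng = np.random.default_rng(0)
n = 30
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

for step in range(20000):
    T = max(0.01, 3.0 * (1 - step / 20000))       # linear cooling schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)                    # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance rule
        s[i] *= -1

print(energy(s))
```

Casting your own problem as an Ising instance means choosing the couplings J so that low-energy spin configurations encode good solutions; once that translation is done, the annealer (digital or optical) does the rest.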

Intelligence: Collecting and Applying Self-Organizing Principles

This takes us to the question of whether general intelligence is possible without switching to a Holistic Computing paradigm. Can you have generally intelligent (digital) chatbots? In some senses, yes. In perhaps the most significant sense, no.

Intelligence is a contentious topic (see here David Pearce’s helpful breakdown of 6 of its facets). One particular facet of intelligence that I find enormously fascinating and largely under-explored is the ability to make sense of new modes of consciousness and then recruit them for computational and aesthetic purposes. THC and music production have a long history of synergy, for instance. A composer who successfully uses THC to generate musical ideas others find novel and meaningful is applying this sort of intelligence. THC-induced states of consciousness are largely dysfunctional for a lot of tasks. But someone who utilizes the sort of intelligence (or meta-intelligence) I’m pointing to will pay attention to the features of experience that do have some novel use and lean on those. THC might impair working memory, but it also expands and stretches musical space. Intensifies reverb, softens rough edges in heart notes, increases emotional range, and adds synesthetic brown noise (which can enhance stochastic resonance). With wit and determination (and co-morbid THC/music addiction), musical artists exploit the oddities of THC musicality to great effect, arguably some much more successfully than others.

The kind of reframe that I’d like you to consider is that we are all in fact something akin to these stoner musicians. We were born with this qualia resonator with lots of cavities, kinds of waves, levels of coupling, and so on. And it took years for us to train it to make adaptive representations of the environment. Along the way, we all (typically) develop a huge repertoire of self-organizing principles we deploy to render what we believe is happening out there in the world. The reason why an experience of “meditation on the wetness of water” can be incredibly powerful is not because you are literally tuning into the resonant frequency of the water around you and in you. No, it’s something very different. You are creating the conditions for the self-organizing principle that we already use to render our experiences with water to take over as the primary organizer of our experience. Since this self-organizing principle does not, by its nature, generate a center, full absorption into “water consciousness” also has a no-self quality to it. Same with the other elements. Excitingly, this way of thinking also opens up our mind about how to craft meditations from first principles. Namely, by creating a periodic table of self-organizing principles and then systematically trying combinations until we identify the laws of qualia chemistry.

You have to come to realize that your brain’s relationship with self-organizing principles is like that of a Pokémon trainer and his Pokémon (ideally in a situation where Pokémon play the Glass Bead Game with each other rather than try to hurt each other– more on that later). Or perhaps like that of a mathematician and clever tricks for proofs, or a musician and rhythmic patterns, and so on. Your brain is a highly tamed inner space qualia warp drive usually working at 1% or less. It has stores of finely balanced and calibrated self-organizing principles that will generate the right atmospheric change to your experience at the drop of a hat. We are usually unaware of how many moods, personalities, contexts, and feelings of the passage of time there are – your brain tries to learn them all so it has them in store for whenever needed. All of a sudden: haze and rain, unfathomable wind, mercury resting motionless. What kind of qualia chemistry did your brain just use to try to render those concepts?

We are using features of consciousness -and the self-organizing principles it affords- to solve problems all the time without explicitly modeling this fact. In my conception of sentient intelligence, being able to recruit self-organizing principles of consciousness for meaningful computation is a pillar of any meaningfully intelligent mind. I think that largely this is what we are doing when humans become extremely good at something (from balancing discs to playing chess and empathizing with each other). We are creating very specialized qualia by finding the right self-organizing principles and then purifying/increasing their quality. To do an excellent modern day job that demands constraint satisfaction at multiple levels of analysis at once likely requires us to form something akin to High-Entropy Alloys of Consciousness. That is, we are usually a judiciously chosen mixture of many self-organizing principles balanced just right to produce a particular niche effect.

Meta-Intelligence

David Pearce’s conception of Full-spectrum Superintelligence is inspiring because it takes into account the state-space of consciousness (and what matters) in judging the quality of a certain intelligence in addition to more traditional metrics. Indeed, as another key conceptual engineering move, I suggest that we can and need to enrich our conception of intelligence in addition to our conception of computation.

So here is my attempt at enriching it further and adding another perspective. One way we can think of intelligence is as the ability to map a problem to a self-organizing principle that will “solve it for you” and having the capacity to instantiate that self-organizing principle. In other words, intelligence is, at least partly, about efficiency: you are successful to the extent that you can take a task that would generally require a large number of manual operations (which take time, effort, and are error-prone) and solve it in an “embodied” way.

Ultimately, a complex system like the one we use for empathy mixes both serial and parallel self-organizing principles for computation. Empathy is enormously cognitively demanding rather than merely a personality trait (e.g. agreeableness), as it requires a complex mirroring capacity that stores and processes information in efficient ways. Exploring exotic states of consciousness is even more computationally demanding. Both are error-prone.

Succinctly, I suggest we consider:

One key facet of intelligence is the capacity to solve problems by breaking them down into two distinct subproblems: (1) find a suitable self-organizing principle you can instantiate reliably, and (2) find out how to translate your problem into a format that this self-organizing principle can be pointed at, so that it solves it for you.

Here is a concrete example. If you want to disentangle a wire, you can try to first put it into a discrete data structure like a graph, and then get the skeleton of the knot in a way that allows you to simplify it with Reidemeister moves (and get lost in the algorithmic complexity of the task). Or you could simply follow the lead of Yu et al. 2021 and make the surfaces repulsive and let this principle solve the problem for you.

(source)

These repulsion-based disentanglement algorithms are explained in this video. Importantly, how to do this effectively still needs fine tuning. The method they ended up using was much faster than the (many) other ones tried (a Full-Spectrum Superintelligence would be able to “wiggle” the wires a bit if they got stuck, of course):

(source)

This is hopefully giving you new ways of thinking about computation and intelligence. The key point to realize is that these concepts are not set in stone, and to a large extent may limit our thinking about sentience and intelligence. 

Now, I don’t believe that if you simulate a self-organizing principle of this sort you will get a conscious mind. The whole point of using physics to solve your problem is that in some cases you get better performance than algorithmically representing a physical system and then using that simulation to instantiate self-organizing principles. Moreover physics simulations, to the extent they are implemented in classical computers, will fail to generate the same field boundaries that would be happening in the physical system. To note, physics-inspired simulations like [Yu et al 2021] are nonetheless enormously helpful to illustrate how to think of problem-solving with a massively parallel analog system.

Are Neural Cellular Automata Conscious?

The computational success of Neural Cellular Automata is primarily algorithmic. In essence, digitally implemented NCA are exploring a paradigm of selection and amplification of self-organizing principles, which is indeed a very different way of thinking about computation. But critically any NCA will still lack sentience. The main reasons are that they (a) don’t use physical fields with weak downward causation, and (b) don’t have a mechanism for binding/boundary making. Digitally-implemented cellular automata may have complex emergent behavior, but they generate no meaningful boundaries (i.e. objective, frame-invariant, causally-significant, and computationally-useful). That said, the computational aesthetic of NCA can be fruitfully imported to the study of Holistic Field Computing, in that the techniques for selecting and amplifying self-organizing principles already solved for NCAs may have analogues in how the brain recruits physical self-organizing principles for computation.
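For concreteness, here is a deliberately minimal, schematic NCA-style update step (illustrative NumPy only, not any particular published model): each cell perceives its neighborhood through fixed filters, a tiny shared rule proposes an update, and only a random subset of cells fires on each step.

```python
# Schematic NCA-style update (illustrative only; not a specific published model):
# perceive locally with fixed filters, apply a tiny shared per-cell update rule,
# and mask the update stochastically so cells fire asynchronously.
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 32, 32, 8
grid = np.zeros((H, W, C)); grid[H // 2, W // 2] = 1.0    # a single seed cell

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
W1 = rng.normal(scale=0.1, size=(3 * C, C))               # the tiny shared "network"

def conv(state, k):
    out = np.zeros_like(state)
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

def step(state):
    perception = np.concatenate([conv(state, identity),
                                 conv(state, sobel_x),
                                 conv(state, sobel_x.T)], axis=-1)
    update = np.tanh(perception @ W1)                      # per-cell residual update
    mask = rng.random((H, W, 1)) < 0.5                     # stochastic cell firing
    return state + update * mask

for _ in range(10):
    grid = step(grid)
```

Everything here is discrete arrays being multiplied: complex emergent behavior, yes, but no physical field, no weak downward causation, and no objective boundary around anything.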

Exotic States of Consciousness

Perhaps one of the most compelling demonstrations of the possible zoo (or jungle) of self-organizing principles out of which your brain is recruiting but a tiny narrow range is to pay close attention to a DMT trip.

DMT states of consciousness are computationally non-trivial on many fronts. It is difficult to convey how enriched the set of experiential building blocks becomes in such states. Their scientific significance is hard to overstate. Importantly, the bulk of the computational power on DMT is dedicated to trying to make the experience feel good and not feel bad. The complexity involved in this task is often overwhelming. But one could envision a DMT-like state in which some parameters have been stabilized in order to recruit standardized self-organizing principles available only in a specific region of the energy-information landscape. I think that cataloguing the precise mathematical properties of the dynamics of attention and awareness on DMT will turn out to have enormous _computational_ value. And a lot of this computational value will generally be pointed towards aesthetic goals.

To give you a hint of what I’m talking about: A useful QRI model (indeed, algorithmic reduction) of the phenomenology of DMT is that it (a) activates high-frequency metronomes that shake your experience and energize it with a high-frequency vibe, and (b) a new medium of wave propagation gets generated that allows very disparate parts of one’s experience to interact with one another.

3D Space Group (CEV on low dose DMT)

At a sufficient dose, DMT’s secondary effect also makes your experience feel sort of “wet” and “saturated”. Your whole being can feel mercurial and liquidy (cf: Plasmatis and Jim Jam). A friend speculates that’s what it’s like for an experience to be one where everything is touching everything else (all at once).

There are many Indra’s Net-type experiences in this space. In brief, experiences where “each part reflects every other part” are an energy minimum that also reduces prediction errors. And there is a fascinating non-trivial connection with the Free Energy Principle, where experiences that minimize internal prediction errors may display a lot of self-similarity.

To a first approximation, I posit that the complex geometry of DMT experiences is indeed the result of non-linearities in the DMT-induced wave propagation medium that appear when it is sufficiently energized (so that it transitions from the linear to the non-linear regime). In other words, the complex hallucinations are energized patterns of non-linear resonance trying to radiate out their excess energy. Indeed, as you come down you experience the phenomenon of condensation of shapes of qualia.

Now, we currently don’t know what computational problems this uncharted cornucopia of self-organizing principles could solve efficiently. The situation is analogous to that of the Ising solver discussed above: we have an incredibly powerful alien computer that will do wonders if we can speak its language, and nothing useful otherwise. Yes, DMT’s computational power is that of an alien computer in search of a problem that fits its technical requirements.

Vibe-To-Shape-And-Back

Michael Johnson, Selen Atasoy, and Steven Lehar have all shaped my thinking about resonance in the nervous system. Steven Lehar in particular brought to my attention non-linear resonance as a principle of computation. In essays like The Constructive Aspect of Visual Perception he presents a lot of visual illusions for which non-linear resonance works as a general explanatory principle (and then in The Grand Illusion he reveals how his insights were informed by psychonautic exploration).

One of the cool phenomenological observations Lehar made based on his exploration with DXM was that each phenomenal object has its own resonant frequency. In particular, each object is constructed with waves interfering with each other at a high enough energy that they bounce off each other (i.e. are non-linear). The relative vibration of each phenomenal object is a function of the resonant frequencies of the waves of energy that construct it by bouncing off one another.

In this way, we can start to see how a “vibe” can be attributed to a particular phenomenal object. In essence, long intervals will create lower resonant frequencies. And if you combine this insight with QRI paradigms, you see how the vibe of an experience can modulate its valence (e.g. soft ADSR envelopes and consonance feeling pleasant). Indeed, on DMT you get to experience the high-dimensional version of music theory, where the valence of a scene is a function of the crazy-complex network of pairwise interactions between phenomenal objects with specific vibratory characteristics. Give thanks to annealing, because tuning this manually would be a nightmare.
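
To make the pairwise-interaction idea concrete, here is a minimal sketch (illustrative assumptions only, not a QRI model) that scores a “scene” by summing a simple Plomp-Levelt-style roughness term over every pair of object frequencies, in the spirit of Sethares’ consonance curves. The constants and the toy “scenes” are made up for the example:

```python
import numpy as np

# Toy sketch (illustrative assumptions only): score the "valence" of a scene
# from the pairwise interactions of its objects' resonant frequencies, using a
# simple Plomp-Levelt-style roughness curve as the dissonance of each pair.

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Rough dissonance of two partials (higher = rougher), a la Sethares (1993)."""
    f_low, f_high = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_low + 19.0)           # approximate critical-bandwidth scaling
    x = s * (f_high - f_low)
    return a1 * a2 * (np.exp(-3.5 * x) - np.exp(-5.75 * x))

def scene_valence(freqs, amps=None):
    """Negative total pairwise roughness: more consonant scenes score higher."""
    amps = np.ones(len(freqs)) if amps is None else amps
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
    return -total

# A roughly harmonic "scene" vs. an inharmonic one (made-up numbers):
print(scene_valence([220, 440, 660, 880]))   # consonant-ish partials -> higher valence
print(scene_valence([220, 233, 452, 707]))   # clashing partials -> lower valence
```

The point is only that a global “vibe” can be computed from nothing but pairwise vibratory relationships.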

But then there is the “global” vibe…

Topological Pockets

So far I’ve provided examples of how Holistic Computing enriches our conception of intelligence and computing, and of how it even shows up in our experience. But what I’ve yet to do is connect this with meaningful boundaries, as we set out to do. In particular, I haven’t explained why Holistic Computing would arise out of topological boundaries.

For the purpose of this essay I’m defining a topological segment (or pocket) to be a region that cannot be expanded any further without the following property becoming false: every point in the region locally belongs to the same connected space.

The Balloons’ Case

In the case of the balloons this cashes out as: a topological segment is a region in which each point can reach any other point without having to pass through connector points/lines/planes. It’s essentially the set of contiguous surfaces.

Now, each of these pockets can have both a rich set of connections to other pockets as well as intricate internal boundaries. The way we could justify Computational Holism being relevant here is that the topological pockets trap energy, and thus allow the pocket to vibrate in ways that express a lot of holistic information. Each contiguous surface makes a sound that represents its entire shape, and thus behaves as a unit in at least this way.

The General Case

An important note here is that I am not claiming that (a) all topological boundaries can be used for Holistic Computing, or (b) that to have Holistic Computing you need topological boundaries. Rather, I’m claiming that the topological segmentation responsible for individuating experiences does have applications for Holistic Computing, that this makes sense conceptually, and that this is why evolution bothered to make us conscious. In the general case, though, you probably do get quite a bit of Holistic Computing without topological segmentation, and vice versa. For example, an LC circuit can be used for Holistic Computing on the basis of its steady analog resonance, but I’m not sure it creates a topological pocket in the EM fields per se.

At this stage of the research we don’t have a leading candidate for the precise topological feature of fields responsible for this. But the explanation space is promising, because it can satisfy theoretical constraints that no other theory we know of can.

But I can nonetheless provide a proof of concept for how a topological pocket does come with really impactful holism. Let’s dive in!

Getting Holistic Behavior Out of a Topological Pocket

Creating a topological pocket may be consequential in one of several ways. One option for getting holistic behavior arises if you can “trap” energy in the pocket. As a consequence, you energize its harmonics: the particular way the whole thing vibrates is a function of the entire shape at once. So, from the inside, every patch now has information about the whole (namely, through the vibration it feels!).**
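
As a toy proof of concept in code (a sketch under loose assumptions, not a model of real EM fields): segment a 2D field into its connected “pockets” and compute each pocket’s lowest vibrational modes. The spectrum belongs to the pocket as a whole, so every cell inside is, in this minimal sense, “informed” about the global shape. The grid, the scipy-based Laplacian construction, and the example shapes are all illustrative:

```python
import numpy as np
from scipy import ndimage, sparse
from scipy.sparse.linalg import eigsh

# Toy proof-of-concept (illustrative, not a model of real EM fields):
# 1) segment a binary field into topological "pockets" (connected components),
# 2) compute each pocket's lowest vibrational modes (Dirichlet Laplacian
#    eigenvalues on the grid). The spectrum is a property of the whole pocket
#    shape, so every cell inside is affected by the global geometry.

def pocket_modes(mask, n_modes=4):
    """Lowest Laplacian eigenvalues of one connected pocket (the True cells)."""
    idx = -np.ones(mask.shape, dtype=int)
    idx[mask] = np.arange(mask.sum())            # number the cells inside the pocket
    rows, cols, vals = [], [], []
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        i = idx[y, x]
        rows.append(i); cols.append(i); vals.append(4.0)   # diagonal of 5-point Laplacian
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                rows.append(i); cols.append(idx[ny, nx]); vals.append(-1.0)
    L = sparse.csr_matrix((vals, (rows, cols)), shape=(mask.sum(), mask.sum()))
    k = min(n_modes, mask.sum() - 1)
    return eigsh(L, k=k, which="SM")[0]          # smallest eigenvalues ~ resonant modes

# Two disjoint pockets in one field: a square and a thin bar.
field = np.zeros((40, 40), dtype=bool)
field[5:15, 5:15] = True                         # pocket 1: 10x10 square
field[25:28, 5:35] = True                        # pocket 2: 3x30 bar
labels, n = ndimage.label(field)                 # flood-fill style segmentation
for p in range(1, n + 1):
    print(f"pocket {p}: lowest modes {pocket_modes(labels == p)}")
```

Here ndimage.label plays the role of finding the pockets, and the Laplacian spectrum stands in for the “sound” each pocket makes: change any part of a pocket’s shape and all of its modes shift together.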


One possible overarching self-organizing principle that the entire pocket may implement is valence-gradient ascent. In particular, some configurations of the field are more pleasant than others, and this has to do with the complexity of the global vibe. Essentially, the reason no part of the field wants to be in a pocket with certain asymmetries is that those asymmetries make themselves known everywhere within the pocket through how the whole thing vibrates. Therefore, for the same reason a soap bubble can become spherical by each point on its surface trying to locally minimize tension, our experiences can become symmetrical and harmonious by having each “point” in them try to maximize its local valence.
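
The soap-bubble analogy can itself be made concrete with a tiny sketch (purely illustrative; the closed curve, the update rule, and the parameters are arbitrary assumptions): every point follows a strictly local rule, yet the global shape relaxes toward symmetry.

```python
import numpy as np

# Toy sketch of the soap-bubble intuition: each point on a closed curve makes a
# purely local move (averaging with its neighbors, i.e. reducing local "tension"),
# yet the global shape relaxes toward a much more symmetric configuration.
# Parameters are arbitrary; this is an analogy, not a model of experience.

n = 200
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
radius = 1.0 + 0.4 * np.random.randn(n)            # start from a lumpy, asymmetric blob
pts = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)

def local_relax(pts, rate=0.3):
    """Each point moves toward the midpoint of its two neighbors (local rule only)."""
    left, right = np.roll(pts, 1, axis=0), np.roll(pts, -1, axis=0)
    return pts + rate * ((left + right) / 2.0 - pts)

r0 = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
print("initial radius spread:", r0.std() / r0.mean())

for step in range(2000):
    pts = local_relax(pts)

r = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
print("after local-only relaxation:", r.std() / r.mean())   # much closer to a circle
```

No point knows the global shape, yet the global shape ends up (nearly) symmetric; that is the sense in which local valence maximization could yield global harmony.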

Self Mirroring

From Lehar’s Cartoon Epistemology

And here we arrive at perhaps one of the craziest but coolest aspects of Holistic Computing I’ve encountered. Essentially, if we go to the non-linear regime, then the whole vibe is not merely the weighted sum of the harmonics of the system. Rather, you might have waves interfere with each other in a concentrated fashion in the various cores/clusters, and in turn these become non-linear structures that will try to radiate out their energy. And to maximize valence there needs to be a harmony between the energy coming in and out of these dense non-linearities. In our phenomenology this may point to our typical self-consciousness. In brief, we have an internal avatar that “reflects” the state of the whole! We are self-mirroring machines! Now this is really non-trivial (and non-linear) Holistic Computing.

Cut From the Same Fabric

So here is where we get to the crux of the insight. Namely, that weakly emergent topological changes can simultaneously have non-trivial causal/computational effects while also solving the boundary problem. We avoid strong emergence but still get a kind of ontological emergence: since consciousness is being cut out of one huge fabric of consciousness, we don’t ever need strong emergence in the form of “consciousness out of the blue all of a sudden”. What you have instead is a kind of ontological birth of an individual. The boundary legitimately created a new being, even if in a way the total amount of consciousness is the same. This is of course an outrageous claim (that you can get “individuals” by e.g. twisting the electric field in just the right way). But I believe the alternatives are far crazier once you understand what they entail.

In a Nutshell

To summarize, we can rule out that any of the current computational systems implementing AI algorithms has anything but trivial consciousness. If there are topological pockets created by e.g. GPUs/TPUs, they are epiphenomenal: the system is designed so that only the local influences it has hardcoded can affect the behavior at each step.

The reason the brain is different is that it has open avenues for solving the boundary problem. In particular, a topological segmentation of the EM field would be a satisfying option, as it would simultaneously give us both holistic field behavior (computationally useful) and a genuine natural boundary. It extends the kind of model explored by Johnjoe McFadden (Conscious Electromagnetic Information Field) and Susan Pockett (Consciousness Is a Thing, Not a Process). They (rightly) point out that the EM field can solve the binding problem; the boundary problem, in turn, is what remains to be solved. With topological boundaries, finally, you can get meaningful boundaries (objective, frame-invariant, causally-significant, and computationally-useful).

This conceptual framework both clarifies what kind of system is, at a minimum, required for sentience, and opens up a research paradigm for systematically exploring topological features of the fields of physics and their plausible use by the nervous system.


* See the “Self Mirroring” section to contrast the self-blindness of a lookup table and the self-awareness of sentient beings.

** More symmetrical shapes will tend to have more clean resonant modes. So to the extent that symmetry tracks fitness on some level (e.g. ability to shed off entropy), then quickly estimating the spectral complexity of an experience can tell you how far it is from global symmetry and possibly health (explanation inspired by: Johnson’s Symmetry Theory of Homeostatic Regulation).




Many thanks to Michael Johnson, David Pearce, Anders & Maggie, and Steven Lehar for many discussions about the boundary/binding problem. Thanks to Anders & Maggie and to Mike for discussions about valence in this context. And thanks to Mike for offering a steel-man of epiphenomenalism. Many thank yous to all our supporters! Much love!

Infinite bliss!

Qualia Productions Presents “Thinking Like a Musical Instrument” (and other communications from our Swedish QRI advisors)

By Anders Amelin and Maggie Wassinge (QRI advisors and volunteer coordinators; see letters I & II, letters III, IV, & V, and letters VI, VII, & VIII)


Happy 2022!

[Here is] your New Year’s gift video titled “Natural Stupidity and the Fermi Paradox or The Tyranny of the Intentional Object”. It is full of things rarely touched upon by respectable scientists. Such as space aliens, and the gender of God. And valence structuralism.

Next, we’ll try to film a truly serious, comedy-less little demonstration of a metallic toy percussion instrument subjected to strain followed by annealing. In what way, if any, will the tone quality change? We have no indication yet of what might come of it, so it’s a bit of a falsification attempt: we might get a null result, with no discernible similarity between a brain on psychedelics and metal on heat. Like most respectable scientists might expect. But, just possibly, there could be something interesting in store.


In this video we illustrate the similarity between the brain and a musical instrument. The brain tissue is represented by metal and the brain activity by sound. The effect of substances such as psychedelics and dissociatives is mimicked by heating and cooling the metal. The engineering term for such heat treatment of metal is “annealing”. What we demonstrate is a very simplified toy model, but one which can be surprisingly useful for understanding the overall type of system dynamics going on in brains.

The model is based on the fact that both sound and neuronal firing are examples of oscillatory activity which can have different frequency, amplitude, coherence, and damping. Hammering the metal represents the memory imprint made in the brain by our ongoing experiences. The sound pattern produced by the hammered metal contains complexity which corresponds to learning. But a side effect of the increased complexity is lower overall consonance of the oscillatory activity.

To stay healthy, the brain must periodically undergo what the Qualia Research Institute calls “neural annealing”. In a neural network model, this can be thought of as redistributing synaptic weights more globally across the connectome, thus making the learned information more harmoniously integrated and holistically retrievable. This normally happens during sleep but can become even more powerful with meditation and psychedelics.
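
For intuition only, here is a minimal simulated-annealing sketch (illustrative assumptions, not QRI’s actual neural annealing model): a toy “energy” stands in for dissonance/frustration in a small network of couplings, and the cooling schedule is what lets large, global rearrangements happen early while only small refinements happen late.

```python
import numpy as np

# Toy simulated-annealing sketch of the annealing analogy (illustrative only):
# "energy" here is a stand-in for dissonance/frustration in a small symmetric
# coupling matrix, and the cooling schedule allows big, global rearrangements
# early on and only small refinements once the system has cooled.

rng = np.random.default_rng(0)

def energy(state, weights):
    """Frustration of a +/-1 state vector under symmetric couplings (lower = more harmonious)."""
    return -0.5 * state @ weights @ state

n = 60
weights = rng.standard_normal((n, n))
weights = (weights + weights.T) / 2.0            # symmetric couplings
np.fill_diagonal(weights, 0.0)
state = rng.choice([-1.0, 1.0], size=n)

T, cooling = 5.0, 0.999                          # start hot, cool slowly
for step in range(20000):
    i = rng.integers(n)
    flipped = state.copy()
    flipped[i] *= -1
    dE = energy(flipped, weights) - energy(state, weights)
    if dE < 0 or rng.random() < np.exp(-dE / T): # accept downhill moves, and uphill ones while hot
        state = flipped
    T *= cooling                                 # annealing: gradually remove the "heat"

print("final energy:", energy(state, weights))
```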

In this demonstration where metal is annealed, it is the positions of the metal atoms which adjust themselves so that the entire piece of metal becomes a better conductor of sound. It may seem strange that this can happen, but neither the metal nor the brain is fundamentally magical. Both cases involve self-organizing system dynamics.

In the case of the brain, the activity is accompanied by conscious experiences. The Qualia Research Institute works under the assumption that these are not magical either but can be modeled mathematically in a similar way to chemistry and physics. It is then necessary to test how measurements of brain activity correlate with conscious experiences not only during sober waking life but also under conditions which are very different.

The QRI is building a new paradigm for understanding the mind and the brain, with a focus on psychedelics and other mental-state-altering methods as scientific research tools and as candidates for use in next-generation psychiatric and pain treatments. We are a small upstart group with opportunities for volunteers and donors to get involved. If you are interested in learning more, please contact us via this e-mail address: hello [-a-t-] qualiaresearchinstitute.org


[They further elaborated:]

Here is a video of a simple experiment we did on how straining and annealing a piece of metal affects its acoustic properties. In a QRI neural annealing interpretation it looks as if something interesting is going on. Simple things like sterling silver, hand hammering, and heating over open flames were used, and recording was done only with iPhones and a Røde SmartLav+ microphone, but the results give qualitative hints that proper hypothesis-testing quantification experiments would be quite straightforwardly feasible.

Especially of interest would be the hints that annealing produces frequency shifts and reverb changes which differ between the high and low frequency ranges, and the hint that annealing reduces dissonance. For quantification of shifts in resonant frequencies, one might manufacture a series of Chladni plates made of different alloys, which could be kept flat while being cold-rolled, heated and cooled to different temperatures at various ramp rates, and trimmed at the edges to alter their geometry. Then find the resonant frequencies for each parameter configuration. As a bonus you’d be able to visualize the modes with a sprinkle of (beach!) sand on top. Then crunch the data to make predictions about brain activity signatures under, for instance, various psychedelics and meditation states.
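
As a sketch of how the quantification step could look (the filename, thresholds, and peak-picking parameters below are placeholder assumptions), one could take the spectrum of each recorded strike and pull out its resonant peaks, so that “before annealing” and “after annealing” configurations can be compared numerically:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

# Sketch of one way to quantify the recordings (filename and thresholds are
# placeholder assumptions): take the spectrum of a single strike and pick out
# the resonant peaks for later comparison across annealing conditions.

rate, audio = wavfile.read("strike_before_annealing.wav")   # placeholder filename
if audio.ndim > 1:
    audio = audio.mean(axis=1)                               # mix to mono
audio = audio.astype(float) / (np.abs(audio).max() + 1e-12)

spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

# Keep peaks that stand well above the background (threshold is arbitrary).
peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.05, distance=50)
for p in sorted(peaks, key=lambda p: spectrum[p], reverse=True)[:8]:
    print(f"resonance near {freqs[p]:7.1f} Hz  (relative amplitude {spectrum[p] / spectrum.max():.2f})")
```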

Another experiment could be to quantify consonance, dissonance, and noise levels for various metal resonators as they are subjected to various forms of stress, strain, and heating/cooling procedures.

It would all be simple enough to almost be like an intern research project but, excitingly, it is unlikely to have been done before. (OK, do a thorough literature search of course. As always…).

Suppose QRI were to explain parsimoniously, with the neural annealing paradigm, how brains pull off the amazing trick of producing plasticity which is “just right” in each modality of function. Artificial neural networks can be trained to impressive levels on complex data sets, but they suffer from catastrophic plasticity (i.e. catastrophic forgetting), in the sense that training on new datasets erases learning achieved on prior sets. This makes AI narrow and also very unsafe with respect to ease of hacking. The AI alignment community provides us with no answer (at least not anything very parsimonious) to how absolute firmness in the modality of core “human values” can be combined with flexible (meta) learning for an AI at a humanlike level of generality. That is the notorious “alignment problem”.

Ultimately every system can be hacked of course and so can human minds. But certain humans are impressive moral role models, and meditation practices seem able to make most of us come at least a little closer to them. Suppose different brain networks loosely correspond to different alloys with correspondingly different ductility, hardness, tensile strength, different annealing temperatures and different hardening and tempering responses when undergoing various stressing, straining and heating/cooling cycling. The variability in this regard found in various metals and alloys is really immense. We’d want to eventually pick out the particular ones which happen to be the most useful for brain modeling.

Consider doing these quantification experiments in metal and then presenting to possible collaborators a brain model in the form of a formalized multi-alloy configuration. Don’t emphasize phenomenology if they are AI engineers, because that is a word which may give them bad vibes. Instead just present it as brain activity in different learning modalities, which can be formalized and turned into software, with annealing and consonance-dissonance-noise as key elements. That is probably the kind of pitch you’d want to bring up if, hypothetically, the head of an AI research group were to ask whether solving consciousness is necessary for producing more advanced AI. Since solving consciousness sounds unpalatably difficult, the answer they’d like the most is that it is not necessary. Hence they won’t care about collaborating with QRI if it is implied that the computational properties of phenomenological mind states must be reverse-engineered. One cannot blame them; it’s dizzyingly daunting to consider. But an information-processing-efficient, metal-acoustics-inspired brain activity model, with nice things like the QRI valence formalism falling into place, could be much easier to pitch. Just don’t call the CDNS-emergent utility measure anything resembling psychology terms like core affect… 😉


Letter IX: On Valence as a Currency Within the Nervous System

[commenting on the video about Zero Ontology:]

A reflection: It’s interesting the way Isaac Luria, during years of meditation, came up with an “inverse” view of the way the universe was created: by subtraction from a mass of infinite potentiality (which corresponds to maximum symmetry) rather than by addition of things to an emptiness by an unexplainably pre-existing divine creator agent. Lurianic mysticism has been a strong inspiration for pantheism and atheism, and even for how to think about information. A nice example of how introspection can give new clues for how to better understand the universe. An isomorphism between fractal patterns in consciousness and fractal patterns in the multiverse generator, perhaps. One could argue that Luria discovered symmetry breaking by meditating. Pretty cool!

While thinking about STV […] we tried to see if there are some more arguments for STV which make sense. So we went to the Less Wrong website. Now that’s a crowd that ignores qualia but they are big on economics and system dynamics.

Found this potentially useful: Evolution of Modularity by johnswentworth.

Life exploits the possibility space of “choreography and catalysis”. It organizes pre-existing physics & chemistry phenomena into a landscape of multiple optimization attractors. Expect modularity at many levels, certainly also within brains. Economies accomplish the same thing as life does: they and life are in a fractally self-similar universe, and economies are derived from the activities of life, both serving to accomplish the exploration and exploitation of environments.

But then with brains we have the binding problem and the mind-body problem.

The QRI could argue that a brain can be mathematically modelled as a self-organizing hierarchical system of resonant cavities. There are certain similarities between that and how an economy can be modelled. Money is what solves the binding problem in an economy, at the same time as it shapes the activity patterns in the economy. A common currency gives the highest efficiency. Profit/loss is the universal preference measure: it has a positive-negative axis, and it has gradients. Account balances and balance histories (assets, liabilities, contracts, derivatives… all using money as the measure) are the measure of aggregate utility. Preference (in the moment) and utility (longer-term aggregate) form a spectrum, with feedback leading to amplification of certain states. Resonances emerge.

Keeping the same currency, fractally, across scales and projective transformations? (image by: Michale Aaron Coleman)

A brain uses long-term memory, working memory, and nonconscious processing in a seamless blend. These seem like quite different components, but it would be expected that evolution kept the same “currency” throughout as the modularity grew. Think of phenomenological valence as a “common currency” preference measure when it is instantiated in working memory (conscious awareness), one which can also “tag” the long-term memory “bank balances” with a marker for immediate reaction along an attraction/avoidance axis, a marker which is there, full blown, the instant the long-term memory is transferred into working memory. The search process by which this happens must aggregate that marker as the search happens, and adjust it continuously as the final result stabilizes into the resonant patterns of working memory. Static, distributed long-term memory, feedforward signaling, feedback amplification: all need a common currency for speed and even workability. It should also be considered much more parsimonious if proto-conditions exploitable by biology are clearly apparent in the physics of nonliving systems. All that happens in living organisms should be taken to be physics and not metaphysics, as a matter of Occam’s razor.
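
As a toy data-structure sketch of this “common currency” idea (purely illustrative; the item fields, threshold, and aggregation rule are our assumptions for the example), imagine long-term memory items each carrying a scalar valence tag, with retrieval aggregating the tags of whatever it touches so that the winning item arrives in working memory with its valence already attached:

```python
from dataclasses import dataclass

# Toy sketch of valence as a "common currency" tag (illustrative only):
# every long-term memory item carries a scalar valence; retrieval aggregates
# the tags of whatever it considers, and the winning item enters working
# memory with its valence attached, biasing approach vs. avoidance.

@dataclass
class MemoryItem:
    content: str
    relevance: float     # how well it matches the current cue (0..1)
    valence: float       # common-currency tag: + approach, - avoid

long_term_memory = [
    MemoryItem("sunny beach holiday", relevance=0.2, valence=+0.9),
    MemoryItem("deadline panic",      relevance=0.7, valence=-0.8),
    MemoryItem("good conversation",   relevance=0.6, valence=+0.6),
]

def retrieve(store, threshold=0.5):
    """Search pass: aggregate valence over everything considered, return the best match."""
    considered = [m for m in store if m.relevance >= threshold]
    running_valence = sum(m.valence * m.relevance for m in considered)
    best = max(considered, key=lambda m: m.relevance, default=None)
    return best, running_valence

item, mood_shift = retrieve(long_term_memory)
print(f"into working memory: {item.content!r}, aggregate valence shift {mood_shift:+.2f}")
```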

It is hard to come up with anything more elegant than a measure of symmetry as the common currency of preference used in brains. This can act as a modifier or transform on information-bearing states in brains. It is flexible, and thus perfect for learning. You can transform back and forth in gradients of strength and still preserve the information the preference “is about”. Aggregation of the preference measure across the fractal activity spectrum and across time builds a utility measure out of the preference measure. Of course, if this happens beyond some breaking point, you can end up with things like PTSD, where awful dissonance and recurring memories of trauma are reiterated again and again. (We may regard economies as more fragile and simpler than minds, but there could be useful parallels between how things can go wrong in economies and how they can go wrong in psychiatric disorders.)


Letter X: On Parametrizing Phenomenology

Parametrizing phenomenology [see entry #2 here] can be to psychology and cognitive science what parametrizing astronomy data is to cosmology. But unlike cosmology, which, although intellectually inspiring, does not produce immediate spinoffs of down-to-Earth applicability, parametrizing phenomenology does, and in the most important way possible: it can be of direct benefit in the diagnosis and treatment of pain conditions, and further on of psychiatric conditions as well.

There is inertia due to the preconceived, and on closer inspection ridiculous, notion that outer space is accessible to scientific formalization while phenomenology space is not. What makes things modelled to be light-years away intrinsically more “accessible” than direct experience in the here and now? In both cases there are patterns, and patterns can be parametrized.

Recall the Yudkowsky alignment-difficulty arguments. Imagine a mathematically reasoning superintelligence which is not a pure replicator and which starts out with a loosely defined goal of caring about life. It would quickly find out that what you can measure you can manage; phenomenology parametrization would follow, and so on, until instrumental goals were formulated which would likely in fact be aligned, but along the lines argued by David Pearce rather than MIRI. With some probability, the superintelligence then uses clever tricks to “manipulate” people into being at least as concerned about pain and fear in insects (and beyond…) as in mammals (humans included as one of the mammal species). From the point of view of the AI alignment community, that could be a failure mode. But what the superintelligence did was simply to solve the alignment problem for human beings. Since humans do not yet realize that valence is a universal thing and that it is the “ground truth” value measure of the universe, to follow valence wherever it leads is to work towards being value-aligned.


[While watching What is Consciousness? (Debate with Christof Koch, Bernardo Kastrup, and Rupert Spira, 12/11/2021) I couldn’t help but pay attention to Kastrup’s bookshelf:]

[Knowing that Anders and Maggie are huge fans of The Hitchhiker’s Guide to the Galaxy, I sent them the above screenshot. Here is what they replied:]

Letter XI: Douglas Adams

Elon Musk once said that the Hitchhiker’s Guide to the Galaxy is the best philosophy book ever. Unfortunately, Douglas Adams passed away before he had the chance to turn the insight “Omnis res animus est” (roughly, “everything is a mind”) into comedy.

🐁
🐀

Blue and gold as heraldic tinctures can symbolize truth and wisdom. These would be nice ingredients in a superintelligence. When Yudkowsky says that the appearance of a superintelligence would mean we are all doomed, he is in some sense correct, yet not at all nuanced. What is true is that the superintelligence would decide that a whole lot about the world needs to be changed quite drastically. Look, you don’t have to be a superintelligence to realize that. You can be, for instance, David Pearce. No wonder that “superintelligence alignment” in a solidly conservative fashion (as in “don’t make any changes other than merely cosmetic ones”) is impossible. Let’s just say that the current world order and the Darwinian mechanisms of the biosphere cannot possibly be attractive for a superintelligence to preserve if it (qualia-)computes truth and wisdom. It would discover the universality of valence, find open individualism to be the Schelling point of all Schelling points, and so on. In contrast, an imaginary (hopefully impossible) superintelligence which computes by a non-qualia yet highly efficient mechanism may in fact be able to learn any arbitrary utility function and be destined to converge on the scary instrumentalities dictated by Darwinian fitness competition. A pure replicator.

There are some possible psychedelic references in the Hitchhiker’s Guide. Given Frogstar, and the hilarious part frogs play in the answer to the question, it may not be a coincidence that the Total Perspective Vortex gives people the worst possible 5-MeO-DMT-style trip (becoming exposed to the infinity of the universe) when the set and setting are designed for the purpose of punishment, while the opposite happens to Zaphod Beeblebrox, who enters the version located in a simulated universe made for him. (Set and setting are, or become, mental simulations.)

The self-navigating-qualiagrams goofiness is meant to have a seriously useful side, which is that at this very moment there are such qualia bundles being transported around internally inside our brains. Some are positive, and you can have them grow and mature into wonderful mind states. They like it when they are allowed the room to grow, and will spontaneously choose to do so, but you have to let them. These are like our mind symbionts. But some are negative and may tend to grow more aggressively, a bit like mind parasites. Those you can hit with metta. You can, with practice, train them to find their own way to the recycling bin of loving kindness. It works great if you stop focusing on the semantic content and instead go for the phenomenal character. Cultivate an image of thoughts as little entities with the preference of wanting to feel better rather than worse, and with the power to adjust their path inside the mind so they can move towards melding with a reservoir of kindness which gets more filled up as it kindly absorbs the sad ones and gives them love. So love, having the property that the more you give the more you get, works not only socially between people but also internally within people’s minds.