Qualia Production Presents: “The Seven Seals of Security” (and Other Communications from QRI Sweden)

By Maggie Wassinge and Anders Amelin (now QRI Sweden and HR helpers; see previous letters)


Jewelry by Anders and Maggie (see: Quantifying Bliss for the reference to “C, D, N”)

Hi everyone!

It’s Anders and Maggie in Stockholm, Sweden, here. Volunteers in human resource coordination for the QRI.

We would like to hereby announce our commitment to donate fifty thousand dollars to the Qualia Research Institute for research related to the mathematical modeling of phenomenological valence.

We are pretty much just your ordinary Swedish transhumanist couple, with a passion for finding out from first principles how things work. We wholeheartedly agree with Elon Musk that at the end of the day, excellence is the only passing grade. Over the last couple of years we have arrived at the solid conclusion that the biggest bonanza in effective altruism can only be realized by first of all solving valence. Symbolically, in comedy form, this is like first spending the necessary computational resources to arrive at the concise “42” as “the answer”, before it can be determined what the right questions must then be. In our book it is without doubt the Qualia Research Institute which corresponds to “Deep Thought” in what Elon Musk has called “the best philosophy book ever”: The Hitchhiker’s Guide to the Galaxy. Seriously, it is advisable to balance this with a bit of David Pearce as well, but we do believe an encouraging “Don’t Panic” is in fact compatible with the laws of nature in this universe. Immense reward awaits those who roll up their sleeves and start working systematically from first principles.

But again, excellence is the only passing grade. The universe is no picnic. It is a field of seemingly infinite potentialities, all of which are open to exploration and exploitation. It is still unknown what the proto-states of sentience are intrinsically like, but it is clear that biological evolution works as an optimization engine for valence polarization. A “passing grade” for a long-term sustainable and prospering technological civilization must involve a universally global first-principles solution to the horrific downside of this: suffering. That solution must be combined with optimal development of the state space of positive valence and intelligence. It seems plausible that experienced negative valence is a computationally economic way for evolution to drive behavior when the implementation is in biochemistry. However, information processing can also be done non-consciously, and it stands to reason that all the informational saliency achieved via negative valence experience can instead be had via non-conscious processes which would be available to future suitably modified embodiments of intelligence.

The QRI is the only real player in this game so far, as our civilization takes its first baby steps towards maturity. In the domain of effective altruism, the Qualia Research Institute today corresponds to what bitcoin was when first launched. The difference is that the QRI doesn’t just promise to be a novel medium of exchange, but a novel competence about the first principles of well-being!

During the couple of years it took for us to come to the above conclusions, we set aside every penny we could spare. That became the fifty thousand we are now committing to the QRI. If enough others with the same visions were to do the same thing, soon enough it could begin adding up to real money. Money which at this foundational stage stands at quite a favorable exchange rate with respect to the ultimate currency of the universe: positive emotional valence.

Infinite bliss to everyone from a couple of Scandinavian old-timers!

High Entropy Alloy (Al + Ti + Cr + Mo + W) and Low-Entropy Non-Alloy (Ti) – made by Anders and Maggie. If non-materialist physicalist idealism (i.e. panpsychism that respects physics) is true, what do these bundles of baryonic matter feel like from the inside?


Letter IV: On Psychonautics

Psychedelic trippers put effort into trying to interpret what it all means ontologically. Plant spirits may be at work, or one taps into the collective unconscious or is simulated by some alien superintelligence.

The QRI could perhaps guide interested psychonauts in the direction of writing more scientifically productive reports.

A scientifically minded tripper needs to start with the realization that human beings are perfect psychonauts because our brains have an enormous excess capacity over what is minimally required to perform any one of the tasks that we do in everyday waking life. The highly unusual aspect of human brains is that they can produce general intelligence. This is rare in nature but when you have it, you assume it to be the normal state of affairs.

Trippers are often in disbelief over the ability of human brains to produce the fantastic content of psychedelic experiences. As if there is suddenly a superpower there which one never uses when sober. How can that be? It must be something supernatural going on, right? Actually, no. Not that we should rule out the “supernatural” a priori but it is not necessary.

The human mind uses a superpower all the time. One which is hidden in plain view, we might say. It is the superpower of selecting from a huge range of possibilities for what the mind could be doing, and homing in on exactly the one choice in every moment that is most appropriate right then and there. When those tight constraints are relaxed the human brain becomes a system which can explore far and wide in qualia state-space.

Intelligence is a phenomenon which uses multiple optimization points to converge on some invariance. At the theoretical efficiency maximum this takes surprisingly (to us) little raw processing power. A jumping spider does not display less strategic and tactical intelligence than a human does when hunting. The spider’s neural network is very much smaller than the human’s but the evolutionary fitness search available for evolving small, numerous and quickly reproducing creatures is much larger than for animals like us. For us it is not so much a question of evolution having optimized what every cell does, but one of having added more and more cells to increase overall performance.

The spider’s brain probably contains far less sub-optimal “spaghetti code” than the human’s. It is possible that the spider has access to exquisitely fine-tuned qualia for the crucial task of sneaking up on big, highly dangerous prey and bringing it down without botching the job. On the other hand, there might not be much opportunity for spiders to evolve general intelligence since they have already done away with everything that is “useless” for their sober everyday lives.

Jumping spider photograph by Thomas Shahan

“Does my brain contain less spaghetti code than yours?”

A human brain is a mass of excitation-inhibition “spaghetti” which defies belief: an almost ultimate jack of all trades, master of none, which cannot quite produce the hunting skills of the spider but can instead do a billion other things that the spider could not, even in principle, learn how to do.

It is the billion other things that we could do but don’t, which is the human superpower, not the few things that we actually do on a sober basis. This is a power which can be harnessed for psychonautics. You’ve got an inner-space warp drive in your head. Aptly named. 🖖


Letter V: Exciting Research Leads

Here are some suggestions for titles of essays and research papers the QRI could write if we had the resources.

  1. “Alloy, anneal, quench and temper: Forging a blade to cut mind at the joints”
  2. “Play me like a violin: A compressibility analysis of neuro-acoustic patterns captured during person to person interaction”
  3. “Leadership and consonance: Aggregate neuro-acoustic compressibility as a proxy for computational efficiency of human group intelligence”
  4. “Neural annealing through laughter: Neuro-acoustics of humor as a factor for healthy mental adjustment”
  5. “The tree of music: An annealable branching tuning-fork model for nervous systems”
  6. “Same but different: Suggesting a qualia analogue for the comparative planetology of Earth and Titan”
  7. “Music of life: Consonance, dissonance, noise and symmetry as explanatory elements for evolution from single cells to human minds”
  8. “Compartments of harmony bounded by dissonance: A neuro-acoustic model of domain specificity in cognition”
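
Several of these proposed titles hinge on the same operational move: using the compressibility of a recorded signal as a rough proxy for its internal order or consonance. A minimal sketch of that move, in Python (the choice of zlib and the tone-versus-noise comparison are our illustrative assumptions, not an established QRI method):

```python
import math
import random
import zlib

def compressibility(samples, levels=256):
    """Crude compressibility proxy: quantize a [-1, 1] signal to bytes
    and measure how much zlib shrinks it. Higher ratio = more structure."""
    data = bytes(int((s + 1.0) / 2.0 * (levels - 1)) for s in samples)
    return len(data) / len(zlib.compress(data, 9))

n = 10_000
# An exactly periodic tone (period 100 samples) stands in for a highly
# "consonant" pattern; white noise stands in for an unstructured one.
tone = [math.sin(2 * math.pi * t / 100) for t in range(n)]
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(n)]

# The periodic signal compresses far better than the noise.
assert compressibility(tone) > 2 * compressibility(noise)
```

The same ratio computed over neural recordings is the kind of quantity the titles above gesture at; real analyses would of course use more principled complexity measures than a general-purpose byte compressor.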

The Seven Seals of Security or Safety Through Uncertainty – Transhumanist Satire



Letter VI: Earth as an Engine of Qualia Diversity

Handwaving Johnson & Gómez-Emilsson’s law about the surprisingly large size of qualia state space:

Presume that consciousness and matter are interconnected information structures. Can any useful parallels be drawn from the matter domain of outer space to the consciousness domain of inner space? Consider that planets, as a group, are subject to variation and (anthropic) selection. An interconnection point is provided by observation selection: Certain planetary properties far from the universe median are going to be found by intelligent conscious observers for their own planet of origin. A small subset of conscious observers are the ones who, like humans, have general intelligence and broad curiosity. Those observers are the few who observe more and more aspects of their own planet as well as adjacent space and the state space of matter at large, and ultimately perhaps also of consciousness at large. The evolutionary reproductive selection of such observers is not the default condition of all life but rather it is conditional upon even more unusual properties of their planet of origin than for the average life-bearing planet.

Conclusion: Earth is likely to be a highly unusual planet, and human consciousness is likely to be a highly unusual seat of experience. They are causally linked. A structural property they share could be a high level of diversity but never reaching cosmically global extremes on any single parameter. A Jill of all trades planet is married to a Jack of all trades mind.

While fairly good at impressively many trades, Jack and Jill are master and mistress of none. For a tentative and very loose analogy which may be better than nothing, let’s say planet Earth is like the human mind. The other planets in the Solar System are like altered human minds and some animal minds. Some basic properties, like gravity, roundness and rotation, are common to all the planets; these correspond to suggested basic features of biologically evolved sentience, such as valence and some sensory modalities.

Then we follow Slartibartfast to the fjords of Norway. Here we see how Earth differs in diversity compared with the other “animals”. The planet’s surface is an energetic 3-phase regime. Solid crust, liquid water and solid water under highly dynamic conditions. Not widely separated like on Europa but forming extended areas of contact where unusual complexity emerges. It’s worth an award, really. (No, not Belgium…).

Human cognition is like Earth with its coasts and mountain ranges. A “just right” quantity and proportionality of ingredients is what allowed the self-organization of Earth’s environmental complexity and its endurance over time via the mechanism of prolonged core solidification and plate tectonics. An unusual state of affairs in nature: not unexpected in principle, only rare in actual existence. The same may go for the evolution of the general cognition accessible to human minds.

A type of mind which is generally competent over multiple domains of agency cannot function as such unless many crucial parameters in its architecture fall within a tight range of “just right, and not too much nor too little”. Or, in Swedish, “lagom”. If you loosen that constraint, such as by ingesting 5 grams of mushrooms blindfolded, your mind will clearly no longer function at your job or even in your body. But in exchange for giving up on that functionality as an agent, you can max out on stuff like… well, it’s beyond words.

General intelligence is not compatible with easy access to extreme states of consciousness, though one can hypothesize that extreme states, as a less frequently added mental ingredient within a group intelligence (like a band of human hunter-gatherers), enhance the abilities of that group intelligence.

But what does the current human “master of none” in qualia rendering imply for the future of consciousness, and what about cosmic matter beyond the neighboring planets?

Beyond the Solar System we find many types of stars, black holes, dark matter and various ultra-thin, ultra-dense, ultra-cold, or ultra-hot configurations of matter in the wider domain of spacetime. Nature usually has not developed nonliving matter into configurations with even remotely as high a complexity as living matter, simply because there were no evolutionary selection pressures. Some nonbiological objects could be strong qualia generators just by chance, though. The Sun comes speculatively (and punlessly) to mind: doing an IIT and a CDNS analysis on its surface magnetized plasma wave patterns may not be entirely far-fetched. But the big promise for expanding the diversity of actualized sentience comes through engineering. Jack and Jill are the couple who can pull that off, and their offspring can then grow up in that fabulous new landscape of experience. For they can become masters and mistresses. Dominatrices, even. The reason being that while the parents are tightly constrained experientially, the kids need not be.
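
For readers unfamiliar with the acronym: CDNS (Consonance, Dissonance, Noise Signature) analysis, from Quantifying Bliss, builds on pairwise sensory-dissonance models of sound. As a toy illustration of the dissonance ingredient only, here is the Sethares (1993) pairwise dissonance curve applied to two-partial spectra; the constants are the commonly quoted fit from Sethares’s model, and applying anything like this to solar plasma waves is, of course, pure speculation on our part:

```python
import math
from itertools import combinations

def sethares_dissonance(f1, f2, a1, a2):
    """Pairwise sensory dissonance of two partials (Sethares 1993 model)."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19)  # critical-bandwidth scaling factor
    d = f_hi - f_lo
    return min(a1, a2) * (math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d))

def total_dissonance(partials):
    """Sum pairwise dissonance over every pair of (frequency, amplitude) partials."""
    return sum(sethares_dissonance(f1, f2, a1, a2)
               for (f1, a1), (f2, a2) in combinations(partials, 2))

octave = [(440.0, 1.0), (880.0, 1.0)]    # consonant: partials far apart
semitone = [(440.0, 1.0), (466.2, 1.0)]  # rough: partials a semitone apart

# A semitone is far more dissonant than an octave under this model.
assert total_dissonance(semitone) > total_dissonance(octave)
```

A CDNS analysis would go further, decomposing a measured spectrum into consonance, dissonance, and noise components; the sketch above only shows the dissonance curve at the heart of that decomposition.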

For an efficiently organized advanced technological civilization, the constraints of being a highly general and resilient intelligence can be placed high up on the group level. Individual seats of experience with the sizes of today’s human or animal brains, say, can then be allowed to render experiential states more specialized to feel meaningful, enjoyable and worthwhile. (A dystopian version could instead generate unimaginable suffering, of course. Need to watch out…).

Earth is by far the most diverse planet in the Solar System, but it does not have the deepest ocean, the tallest mountain, the highest gravity, the hottest days, the most explosive volcanoes or the most intense thunderstorms. Human minds who have only experienced their evolved biologically functional mental states have not reached the consciousness state space equivalents of the extreme environments on the other planets. They have never snowboarded down the tellurobismutite condensate slopes on Maxwell Montes or been ejected on a ballistic trajectory by a sulfur dioxide plume from Tvashtar Patera. These things may be comparable to being a bat or taking psilocybin. As different from sober human experience as they are, they still merely hint at the range of possible experiences in the qualia state space opening up beyond. If all goes well, there will be psychonauts of the future who are children of Earth and able to engineer any form of matter and energy into conscious brain architectures. They would become what Max Tegmark has called “Life 3.0”.

Either that or, in a hopefully not terribly more likely scenario portrayed by imagined future historians, humanity stayed obsessed with the circulation of money to the detriment of all else.

“This planet has – or rather had – a problem, which was this: most of the people living on it were unhappy for pretty much of the time. Many solutions were suggested for this problem, but most of these were largely concerned with the movement of small green pieces of paper, which was odd because on the whole it wasn’t the small green pieces of paper that were unhappy.” ― Douglas Adams, The Hitchhiker’s Guide to the Galaxy 🌎



Earth as an Engine of Qualia Diversity

Qualia Computing at: TSC 2020, IPS 2020, unSCruz 2020, and Ephemerisle 2020

[March 12 2020 update: Both TSC and IPS are being postponed due to the coronavirus situation. At the moment we don’t know if the other two events will go ahead. I’ll update this entry when there is a confirmation either way].


These are the 2020 events lined up for me at the moment (though more are likely to pop up):

  • I will be attending The Science of Consciousness 2020 from the 13th to the 17th of April representing the Qualia Research Institute (QRI). I will present a novel approach to solving the combination problem for panpsychism. The core idea is to use the concept of topological segmentation in order to explain how the universal wavefunction can develop boundaries with causal power (and thus capable of being recruited by natural selection for information-processing purposes) which might also be responsible for the creation of discrete moments of experience. I am including the abstract in this post (see below).
  • I will then fly out to Boston for the Intercollegiate Psychedelics Summit (IPS) from the 18th to the 20th of April (though I will probably stay for a few more days in order to meet people in the area). Here I will be presenting on intelligent strategies for exploring the state-space of consciousness.
  • At the end of April I will be attending the 2020 Santa Cruz Burning Man Regional (“unSCruz”) with a small contingent of members and friends of QRI. We will be showcasing some of our neurotech prototypes and conducting smell tests (article about this coming soon).
  • And from the 20th to the 27th of July I will be at Ephemerisle 2020 alongside other members of QRI. We will be staying on the “Consciousness Boat” and showcasing some interesting demos. In particular, expect to see new colors, have fully-sober stroboscopic hallucinations, and explore the state-space of visual textures.

I am booking some time in advance to meet with Qualia Computing readers, people interested in the works of the Qualia Research Institute, and potential interns and visiting scholars. Please message me if you are attending any of these events and would like to meet up.


Here is the abstract I submitted to TSC 2020:

Title – Topological Segmentation: How Dynamic Stability Can Solve the Combination Problem for Panpsychism

Primary Topic Area – Mental Causation and the Function of Consciousness

Secondary Topic Area – Panpsychism and Cosmopsychism

Abstract – The combination problem complicates panpsychist solutions to the hard problem of consciousness (Chalmers 2013). A satisfactory solution would (1) avoid strong emergence, (2) sidestep the hard problem of consciousness, (3) prevent the complications of epiphenomenalism, and (4) be compatible with the modern scientific world picture. We posit that topological approaches to the combination problem of consciousness could achieve this. We start by assuming a version of panpsychism in which quantum fields are fields of qualia, as is implied by the intrinsic nature argument for panpsychism (Strawson 2008) in conjunction with wavefunction realism (Ney 2013). We take inspiration from quantum chemistry, where the observed dynamic stability of the orbitals of complex molecules requires taking the entire system into account at once. The scientific history of models for chemical bonds starts with simple building blocks (e.g. Lewis structures), and each step involves updating the model to account for holistic behavior (e.g. resonance, molecular orbital theory, and the Hartree-Fock method). Thus the causal properties of a molecule are identified with the fixed points of dynamic stability for the entire atomic system. The formalization of chemical holism physically explains why molecular shapes that create novel orbital structures have a weak downward causation effect on the world without needing to invoke strong emergence. For molecules to be “natural units” rather than just conventional units, we can introduce the idea that topological segmentation of the wavefunction is responsible for the creation of new beings. In other words, if dynamical stability entails the topological segmentation of the wavefunction, we get a story where physically-driven behavioral holism is accompanied by the ontological creation of new beings.
Applying this insight to solve the combination problem for panpsychism, each moment of experience might be identified with a topologically distinct segment of the universal wavefunction. This topological approach makes phenomenal binding weakly causally emergent along with entailing the generation of new beings. The account satisfies the set of desiderata we started with: (1) no strong emergence is required because behavioral holism is implied by dynamic stability (itself only weakly emergent on the laws of physics), (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. This approach to the binding problem does not itself identify the properties responsible for the topological segmentation of the universal wavefunction that creates distinct moments of experience. But it does tell us where to look. In particular, we posit that both quantum coherence and entanglement networks may have the precise desirable properties of dynamical stability accompanied by topological segmentation. Hence experimental paradigms such as probing the CNS at femtosecond timescales to find a structural match between quantum coherence and local binding (Pearce 2014) could empirically validate our solution to the combination problem for panpsychism.



See Also:

One for All and All for One

By David Pearce (response to Quora question: “What does David Pearce think of closed, empty, and open individualism?”)


Vedanta teaches that consciousness is singular, all happenings are played out in one universal consciousness and there is no multiplicity of selves.

 

– Erwin Schrödinger, ‘My View of the World’, 1951

Enlightenment came to me suddenly and unexpectedly one afternoon in March [1939] when I was walking up to the school notice board to see whether my name was on the list for tomorrow’s football game. I was not on the list. And in a blinding flash of inner light I saw the answer to both my problems, the problem of war and the problem of injustice. The answer was amazingly simple. I called it Cosmic Unity. Cosmic Unity said: There is only one of us. We are all the same person. I am you and I am Winston Churchill and Hitler and Gandhi and everybody. There is no problem of injustice because your sufferings are also mine. There will be no problem of war as soon as you understand that in killing me you are only killing yourself.

 

– Freeman Dyson, ‘Disturbing the Universe’, 1979

Common sense assumes “closed” individualism: we are born, live awhile, and then die. Common sense is wrong about most things, and the assumption of enduring metaphysical egos is true to form. Philosophers sometimes speak of the “indiscernibility of identicals”. If a = b, then everything true of a is true of b. This basic principle of logic is trivially true. Our legal system, economy, politics, academic institutions and personal relationships assume it’s false. Violation of elementary logic is a precondition of everyday social life. It’s hard to imagine any human society that wasn’t founded on such a fiction. The myth of enduring metaphysical egos and “closed” individualism also leads to a justice system based on scapegoating. If we were accurately individuated, then such scapegoating would seem absurd.

Among the world’s major belief-systems, Buddhism comes closest to acknowledging “empty” individualism: enduring egos are a myth (cf. “non-self” or Anatta – Wikipedia). But Buddhism isn’t consistent. All our woes are supposedly the product of bad “karma”, the sum of our actions in this and previous states of existence. Karma as understood by Buddhists isn’t the deterministic cause and effect of classical physics, but rather the contribution of bad intent and bad deeds to bad rebirths.

Among secular philosophers, the best-known defender of (what we would now call) empty individualism minus the metaphysical accretions is often reckoned David Hume. Yet Hume was also a “bundle theorist”, sceptical of the diachronic and the synchronic unity of the self. At any given moment, you aren’t a unified subject (“For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat, cold, light or shade, love or hatred, pain or pleasure. I can never catch myself at any time without a perception, and can never observe anything but the perception” (‘On Personal Identity’, A Treatise of Human Nature, 1739)). So strictly, Hume wasn’t even an empty individualist. Contrast Kant’s “transcendental unity of apperception”, aka the unity of the self.

An advocate of common-sense closed individualism might object that critics are abusing language. Thus “Winston Churchill”, say, is just the name given to an extended person born in 1874 who died in 1965. But adhering to this usage would mean abandoning the concept of agency. When you raise your hand, a temporally extended entity born decades ago doesn’t raise its collective hand. Raising your hand is a specific, spatio-temporally located event. In order to make sense of agency, only a “thin” sense of personal identity can work.

According to “open” individualism, there exists only one numerically identical subject who is everyone at all times. Open individualism was christened by philosopher Daniel Kolak, author of I Am You (2004). The roots of open individualism are ancient, stretching back at least to the Upanishads. The older name is monopsychism. I am Jesus, Moses and Einstein, but also Hitler, Stalin and Genghis Khan. And I am also all pigs, dinosaurs and ants: subjects of experience date to the late Pre-Cambrian, if not earlier.

My view?
My ethical sympathies lie with open individualism; but as it stands, I don’t see how a monopsychist theory of identity can be true. Open or closed individualism might (tenuously) be defensible if we were electrons (cf. One-electron universe – Wikipedia). However, sentient beings are qualitatively and numerically different. For example, the half-life of a typical protein in the brain is an estimated 12–14 days. Identity over time is a genetically adaptive fiction for the fleetingly unified subjects of experience generated by the CNS of animals evolved under pressure of natural selection (cf. Was Parfit correct we’re not the same person that we were when we were born?). Even memory is a mode of present experience. Both open and closed individualism are false.

By contrast, the fleeting synchronic unity of the self is real, scientifically unexplained (cf. the binding problem) and genetically adaptive. How a pack of supposedly decohered membrane-bound neurons achieves a classically impossible feat of virtual world-making leads us into deep philosophical waters. But whatever the explanation, I think empty individualism is true. Thus I share with my namesakes – the authors of The Hedonistic Imperative (1995) – the view that we ought to abolish the biology of suffering in favour of genetically-programmed gradients of superhuman bliss. Yet my namesakes elsewhere in tenselessly existing space-time (or Hilbert space) physically differ from the multiple David Pearces (DPs) responding to your question. Using numerical superscripts, e.g. DP^564356, DP^54346 (etc), might be less inappropriate than using a single name. But even “DP” here is misleading because such usage suggests an enduring carrier of identity. No such enduring carrier exists, merely modestly dynamically stable patterns of fundamental quantum fields. Primitive primate minds were not designed to “carve Nature at the joints”.

However, just because a theory is true doesn’t mean humans ought to believe in it. What matters are its ethical consequences. Will the world be a better or worse place if most of us are closed, empty or open individualists? Psychologically, empty individualism is probably the least emotionally satisfying account of personal identity – convenient when informing an importunate debt-collection company they are confusing you with someone else, but otherwise a recipe for fecklessness, irresponsibility and overly-demanding feats of altruism. Humans would be more civilised if most people believed in open individualism. The factory-farmed pig destined to be turned into a bacon sandwich is really you; the conventional distinction between selfishness and altruism collapses. Selfish behaviour is actually self-harming. Not just moral decency, but decision-theoretic rationality dictates choosing a veggie burger rather than a meat burger. Contrast the metaphysical closed individualism assumed by, say, the Less Wrong Decision Theory FAQ. And indeed, all first-person facts, not least the distress of a horribly abused pig, are equally real. None are ontologically privileged. More speculatively, if non-materialist physicalism is true, then fields of subjectivity are what the mathematical formalism of quantum field theory describes. The intrinsic nature argument proposes that only experience is physically real. On this story, the mathematical machinery of modern physics is transposed to an idealist ontology. This conjecture is hard to swallow; I’m agnostic.

Photo: Peter Mosimann, Bern, 20.5.2003 (“Kuppel”)

“One for all, all for one” – unofficial motto of Switzerland.

Speculative solutions to the Hard Problem of consciousness aside, the egocentric delusion of Darwinian life is too strong for most people to embrace open individualism with conviction. Closed individualism is massively fitness-enhancing (cf. Are you the center of the universe?). Moreover, temperamentally happy people tend to have a strong sense of enduring personal identity and agency; depressives have a weaker sense of personhood. Most of the worthwhile things in this world (as well as its biggest horrors) are accomplished by narcissistic closed individualists with towering egos. Consider the transhumanist agenda. Working on a cure for the terrible disorder we know as aging might in theory be undertaken by empty individualists or open individualists; but in practice, the impetus for defeating death and aging comes from strong-minded and “selfish” closed individualists who don’t want their enduring metaphysical egos to perish. Likewise, the well-being of all sentience in our forward light-cone – the primary focus of most DPs – will probably be delivered by closed individualists. Benevolent egomaniacs will most likely save the world.

“One for all, all for one”, as Alexandre Dumas put it in The Three Musketeers?
Maybe one day: full-spectrum superintelligence won’t have a false theory of personal identity. “Unus pro omnibus, omnes pro uno” is the unofficial motto of Switzerland. It deserves to be the ethos of the universe.


Breaking Down the Problem of Consciousness

Below you will find three different breakdowns for what a scientific theory of consciousness must be able to account for, formulated in slightly different ways.

First, David Pearce posits these four fundamental questions (the simplicity of this breakdown comes with the advantage that it might be the easiest to remember):

  1. The existence of consciousness
  2. The causal and computational properties of experience (including why we can even talk about consciousness to begin with, why consciousness evolved in animals, etc.)
  3. The nature and interrelationship between all the qualia varieties and values (why does scent exist? and in exactly what way is it related to color qualia?)
  4. The binding problem (why are we not “mind dust” if we are made of atoms)

David Pearce’s Four Questions Any Scientific Theory of Consciousness Must Be Able to Answer


Second, we have Giulio Tononi’s IIT:

  1. The existence of consciousness
  2. The composition of consciousness (colors, shapes, etc.)
  3. Its information content (the fact that each experience is “distinct”)
  4. The unity of consciousness (why does seeing the color blue not only change a part of your visual field, but in some sense change the experience as a whole?)
  5. The borders of experience (also called ‘exclusion principle’; that each experience excludes everything not in it; presence of x implies negation of ~x)

Giulio Tononi’s 5 Axioms of Consciousness


Finally, Michael Johnson breaks it down into what he sees as a set of ultimately tractable problems. Taken as a whole, the problem of consciousness may be conceptually daunting and scientifically puzzling, but this framework seeks to paint a picture of what a solution should look like. These are:

  1. Reality mapping problem (what is the formal ontology that can map reality to consciousness?)
  2. Substrate problem (in such an ontology, which objects and processes contribute to consciousness?)
  3. Boundary problem (akin to the binding problem, but reformulated to be agnostic about an atomistic ontology of systems)
  4. Scale problem (how to connect the scale of our physical ontology with the spatio-temporal scale at which experiences happen?)
  5. Topology of information problem (how do we translate the physical information inside the boundary into the adequate mathematical object used in our formalism?)
  6. State-space problem (what mathematical features does each qualia variety, value, and binding architecture correspond to?)
  7. Translation problem (starting with the mathematical object corresponding to a specific experience within the correct formalism, how do you derive the phenomenal character of the experience?)
  8. Vocabulary problem (how can we improve language to talk directly about natural kinds?)

Michael Johnson’s 8 Problems of Consciousness

Each of these breakdowns has advantages and disadvantages. But I think they are all very helpful and capable of improving the way we understand consciousness. While pondering the “hard problem of consciousness” can lead to fascinating and strange psychological effects (much akin to asking the question “why is there something rather than nothing?”), addressing the problem space at a finer level of granularity almost always delivers better results. In other words, posing the “hard problem” is less useful than decomposing the question into actually addressable problems. The overall point is that by doing so one is in some sense actually trying to understand, rather than merely restating one’s own confusion.

Do you know of any other such breakdown of the problem space?



Psychotic Depression

Excerpt from Infinite Jest by David Foster Wallace (pgs. 692-698)

And re Ennet House resident Kate Gompert and this depression issue:

Some psychiatric patients — plus a certain percentage of people who’ve gotten so dependent on chemicals for feelings of well-being that when the chemicals have to be abandoned they undergo a loss-trauma that reaches way down deep into the soul’s core systems — these persons know firsthand that there’s more than one kind of so-called ‘depression’. One kind is low-grade and sometimes gets called anhedonia[280] or simple melancholy. It’s a kind of spiritual torpor in which one loses the ability to feel pleasure or attachment to things formerly important. The avid bowler drops out of his league and stays home at night staring dully at kick-boxing cartridges. The gourmand is off his feed. The sensualist finds his beloved Unit all of a sudden to be so much feelingless gristle, just hanging there. The devoted wife and mother finds the thought of her family about as moving, all of a sudden, as a theorem of Euclid. It’s a kind of emotional novocaine, this form of depression, and while it’s not overly painful its deadness is disconcerting and… well, depressing. Kate Gompert’s always thought of this anhedonic state as a kind of radical abstracting of everything, a hollowing out of stuff that used to have affective content. Terms that the undepressed toss around and take for granted as full and fleshy — happiness, joie de vivre, preference, love — are stripped to their skeletons and reduced to abstract ideas. They have, as it were, denotation but not connotation. The anhedonic can still speak about happiness and meaning et al., but she has become incapable of feeling anything in them, of understanding anything about them, of hoping anything about them, or of believing them to exist as anything more than concepts. Everything becomes an outline of the thing. Objects become schemata. The world becomes a map of the world. An anhedonic can navigate, but has no location. I.e. the anhedonic becomes, in the lingo of Boston AA, Unable To Identify.

It’s worth noting that, among younger E.T.A.s, the standard take on Dr. J. O. Incandenza’s suicide attributes his putting his head in the microwave to this kind of anhedonia. This is maybe because anhedonia’s often associated with the crises that afflict extremely goal-oriented people who reach a certain age having achieved all or more than all that they’d hoped for. The what-does-it-all-mean-type crisis of middle-aged Americans. In fact this is in fact not what killed Incandenza at all. In fact the presumption that he’d achieved all his goals and found that the achievement didn’t confer meaning or joy on his existence says more about the students at E.T.A. than it says about Orin’s and Hal’s father: still under the influence of the deLint-like carrot-and-stick philosophies of their hometown coaches rather than the more paradoxical Schtitt/Incandenza/Lyle school, younger athletes who can’t help gauging their whole worth by their place in an ordinal ranking use the idea that achieving their goals and finding the gnawing sense of worthlessness still there in their own gut as a kind of psychic bogey, something that they can use to justify stopping on their way down to dawn drills to smell flowers along the E.T.A. paths. The idea that achievement doesn’t automatically confer interior worth is, to them, still, at this age, an abstraction, rather like the prospect of their own death — ‘Caius Is Mortal’ and so on. Deep down, they all still view the competitive carrot as the grail. They’re mostly going through the motions when they invoke anhedonia. They’re mostly small children, keep in mind. Listen to any sort of sub-16 exchange you hear in the bathroom or food line: ‘Hey, there, how are you?’ ‘Number eight this week, is how I am.’ They all still worship the carrot.
With the possible exception of the tormented LaMont Chu, they all still subscribe to the delusive idea that the continent’s second-ranked fourteen-year-old feels exactly twice as worthwhile as the continent’s #4.

Deluded or not, it’s still a lucky way to live. Even though it’s temporary. It may well be that the lower-ranked little kids at E.T.A. are proportionally happier than the higher-ranked kids, since we (who are mostly not small children) know it’s more invigorating to want than to have, it seems. Though maybe this is just the inverse of the same delusion.

Hal Incandenza, though he has no idea yet of why his father really put his head in a specially-dickied microwave in the Year of the Trial-Size Dove Bar, is pretty sure that it wasn’t because of standard U.S. anhedonia. Hal himself hadn’t had a bona fide intensity-of-interior-life-type emotion since he was tiny; he finds terms like joie and value to be like so many variables in rarified equations, and he can manipulate them well enough to satisfy everyone but himself that he’s in there, inside his own hull, as a human being — but in fact he’s far more robotic than John Wayne. One of his troubles with his Moms is the fact that Avril Incandenza believes she knows him inside and out as a human being, and an internally worthy one at that, when in fact inside Hal there’s pretty much nothing at all, he knows. His Moms Avril hears her own echoes inside him and thinks what she hears is him, and this makes Hal feel the one thing he feels to the limit, lately: he is lonely.

It’s of some interest that the lively arts of the millennial U.S.A. treat anhedonia and internal emptiness as hip and cool. It’s maybe the vestiges of the Romantic glorification of Weltschmerz, which means world-weariness or hip ennui. Maybe it’s the fact that most of the arts here are produced by world-weary and sophisticated older people and then consumed by younger people who not only consume art but study it for clues on how to be cool, hip — and keep in mind that, for kids and younger people, to be hip and cool is the same as to be admired and accepted and included and so Unalone. Forget so-called peer-pressure. It’s more like peer-hunger. No? We enter a spiritual puberty where we snap to the fact that the great transcendent horror is loneliness, excluded encagement in the self. Once we’ve hit this age, we will now give or take anything, wear any mask, to fit, be part-of, not be Alone, we young. The U.S. arts are our guide to inclusion. A how-to. We are shown how to fashion masks of ennui and jaded irony at a young age where the face is fictile enough to assume the shape of whatever it wears. And then it’s stuck there, the weary cynicism that saves us from gooey sentiment and unsophisticated naïveté. Sentiment equals naïveté on this continent (at least since the Reconfiguration). One of the things sophisticated viewers have always liked about J. O. Incandenza’s The American Century as Seen Through a Brick is its unsubtle thesis that naïveté is the last true terrible sin in the theology of millennial America. And since sin is the sort of thing that can be talked about only figuratively, it’s natural that Himself’s dark little cartridge was mostly about a myth, viz. that queerly persistent U.S. myth that cynicism and naïveté are mutually exclusive.
Hal, who’s empty but not dumb, theorizes privately that what passes for hip cynical transcendence of sentiment is really some kind of fear of being really human, since to be really human (at least as he conceptualizes it) is probably to be unavoidably sentimental and naïve and goo-prone and generally pathetic, is to be in some basic interior way forever infantile, some sort of not-quite-right-looking infant dragging itself anaclitically around the map, with big wet eyes and froggy-soft skin, huge skull, gooey drool. One of the really American things about Hal, probably, is the way he despises what it is he’s really lonely for: this hideous internal self, incontinent of sentiment and need, that pulses and writhes just under the hip empty mask, anhedonia.[281]

The American Century as Seen Through a Brick’s main and famous key-image is of a piano-string vibrating — a high D, it looks like — vibrating and making a very sweet unadorned solo sound indeed, and then a little thumb comes into the frame, a blunt moist pale and yet dingy thumb, with disreputable stuff crusted in one of the nail-corners, small and unlined, clearly an infantile thumb, and as it touches the piano string the high sweet sound immediately dies. And the silence that follows is excruciating. Later in the film, after much mordant and didactic panoramic brick-following, we’re back at the piano-string, and the thumb is removed, and the high sweet sound recommences, extremely pure and solo, and yet now somehow, as the volume increases, now with something rotten about it underneath, there’s something sick-sweet and overripe and potentially putrid about the one clear high D as its volume increases and increases, the sound getting purer and louder and more dysphoric until after a surprisingly few seconds we find ourselves right in the middle of the pure undampered sound longing and even maybe praying for the return of the natal thumb, to shut it up.

Hal isn’t old enough yet to know that this is because numb emptiness isn’t the worst kind of depression. That dead-eyed anhedonia is but a remora on the ventral flank of the true predator, the Great White Shark of pain. Authorities term this condition clinical depression or involutional depression or unipolar dysphoria. Instead of just an incapacity for feeling, a deadening of the soul, the predator-grade depression Kate Gompert always feels as she Withdraws from secret marijuana is itself a feeling. It goes by many names — anguish, despair, torment, or q.v. Burton’s melancholia or Yevtuschenko’s more authoritative psychotic depression — but Kate Gompert, down in the trenches with the thing itself, knows it simply as It.

It is a level of psychic pain wholly incompatible with human life as we know it. It is a sense of radical and thoroughgoing evil not just as a feature but as the essence of conscious existence. It is a sense of poisoning that pervades the self at the self’s most elementary levels. It is a nausea of the cells and soul. It is an unnumb intuition in which the world is fully rich and animate and un-map-like and also thoroughly painful and malignant and antagonistic to the self, which depressed self It billows on and coagulates around and wraps in Its black folds and absorbs into Itself, so that an almost mystical unity is achieved with a world every constituent of which means painful harm to the self. Its emotional character, the feeling Gompert describes It as, is probably mostly indescribable except as a sort of double bind in which any/all of the alternatives we associate with human agency — sitting or standing, doing or resting, speaking or keeping silent, living or dying — are not just unpleasant but literally horrible.

It is also lonely on a level that cannot be conveyed. There is no way Kate Gompert could ever even begin to make someone else understand what clinical depression feels like, not even another person who is herself clinically depressed, because a person in such a state is incapable of empathy with any other living thing. This anhedonic Inability To Identify is also an integral part of It. If a person in physical pain has a hard time attending to anything except that pain[282], a clinically depressed person cannot even perceive any other person or thing as independent of the universal pain that is digesting her cell by cell. Everything is part of the problem, and there is no solution. It is a hell for one.

The authoritative term psychotic depression makes Kate Gompert feel especially lonely. Specifically the psychotic part. Think of it this way. Two people are screaming in pain. One of them is being tortured with electric current. The other is not. The screamer who’s being tortured with electric current is not psychotic: her screams are circumstantially appropriate. The screaming person who’s not being tortured, however, is psychotic, since the outside parties making the diagnoses can see no electrodes or measurable amperage. One of the least pleasant things about being psychotically depressed on a ward full of psychotically depressed patients is coming to see that none of them is really psychotic, that their screams are entirely appropriate to certain circumstances part of whose special charm is that they are undetectable by any outside party. Thus the loneliness: it’s a closed circuit: the current is both applied and received from within.

The so-called ‘psychotically depressed’ person who tries to kill herself doesn’t do so out of quote ‘hopelessness’ or any abstract conviction that life’s assets and debits do not square. And surely not because death seems suddenly appealing. The person in whom Its invisible agony reaches a certain unendurable level will kill herself the same way a trapped person will eventually jump from the window of a burning high-rise. Make no mistake about people who leap from burning windows. Their terror of falling from a great height is still just as great as it would be for you or me standing speculatively at the same window just checking out the view; i.e. the fear of falling remains constant. The variable here is the other terror, the fire’s flames: when the flames get close enough, falling to death becomes the slightly less terrible of the two terrors. It’s not desiring the fall; it’s terror of the flames. And yet nobody down on the sidewalk, looking up and yelling ‘Don’t!’ and ‘Hang on!’, can understand the jump. Not really. You’d have to have personally been trapped and felt flames to really understand a terror way beyond falling.

But and so the idea of a person in the grip of It being bound by a ‘Suicide Contract’ some well-meaning Substance-abuse halfway house makes her sign is simply absurd. Because such a contract will constrain such a person only until the exact psychic circumstances that made the contract necessary in the first place assert themselves, invisibly and indescribably. That the well-meaning halfway-house Staff does not understand Its overriding terror will only make the depressed resident feel more alone.

One fellow psychotically depressed patient Kate Gompert came to know at Newton-Wellesley Hospital in Newton two years ago was a man in his fifties. He was a civil engineer whose hobby was model trains — like from Lionel Trains Inc., etc. — for which he erected incredibly intricate systems of switching and track that filled his basement recreation room. His wife brought photographs of the trains and network of trellis and track into the locked ward, to help remind him. The man said he had been suffering from psychotic depression for seventeen straight years, and Kate Gompert had had no reason to disbelieve him. He was stocky and swart with thinning hair and hands that he held very still in his lap as he sat. Twenty years ago he had slipped on a patch of 3-In-1-brand oil from his model-train tracks and bonked his head on the cement floor of his basement rec room in Wellesley Hills, and when he woke up in the E.R. he was depressed beyond all human endurance, and stayed that way. He’d never once tried suicide, though he confessed that he yearned for unconsciousness without end. His wife was very devoted and loving. She went to Catholic Mass every day. She was very devout. The psychotically depressed man, too, went to daily Mass when he was not institutionalized. He prayed for relief. He still had his job and his hobby. He went to work regularly, taking medical leaves only when the invisible torment got too bad for him to trust himself, or when there was some radical new treatment the psychiatrists wanted him to try. They’d tried Tricyclics, M.A.O.I.s, insulin-comas, Selective-Serotonin-Reuptake Inhibitors[283], the new and side-effect-laden Quadracyclics. They’d scanned his lobes and affective matrices for lesions and scars. Nothing worked. Not even high-amperage E.C.T. relieved It. This happens sometimes. Some cases of depression are beyond human aid. The man’s case gave Kate Gompert the howling fantods. 
The idea of this man going to work and to Mass and building miniaturized railroad networks day after day after day while feeling anything like what Kate Gompert felt in that ward was simply beyond her ability to imagine. The rationo-spiritual part of her knew this man and his wife must be possessed of a courage way off any sort of known courage-chart. But in her toxified soul Kate Gompert felt only a paralyzing horror at the idea of the squat dead-eyed man laying toy track slowly and carefully in the silence of his wood-panelled rec room, the silence total except for the sounds of the track being oiled and snapped together and laid into place, the man’s head full of poison and worms and every cell in his body screaming for relief from flames no one else could help with or even feel.

The permanently psychotically depressed man was finally transferred to a place on Long Island to be evaluated for a radical new type of psychosurgery where they supposedly went in and yanked out your whole limbic system, which is the part of the brain that causes all sentiment and feeling. The man’s fondest dream was anhedonia, complete psychic numbing. I.e. death in life. The prospect of radical psychosurgery was the dangled carrot that Kate guessed still gave the man’s life enough meaning for him to hang onto the windowsill by his fingernails, which were probably black and gnarled from the flames. That and his wife: he seemed genuinely to love his wife, and she him. He went to bed every night at home holding her, weeping for it to be over, while she prayed or did that devout thing with beads.

The couple had gotten Kate Gompert’s mother’s address and had sent Kate an Xmas card the last two years, Mr. and Mrs. Ernest Feaster of Wellesley Hills MA, stating that she was in her prayers and wishing her all available joy. Kate Gompert doesn’t know whether Mr. Ernest’s limbic system got yanked out or not. Whether he achieved anhedonia. The Xmas cards had had excruciating little watercolor pictures of locomotives on them. She could barely stand to think about them, even at the best of times, which the present was not.


[280] Anhedonia was apparently coined by Ribot, a Continental Frenchman, who in his 19th-century Psychologie des Sentiments says he means it to denote the psychoequivalent of analgesia, which is the neurologic suppression of pain.

[281] This had been one of Hal’s deepest and most pregnant abstractions, one he’d come up with once while getting secretly high in the Pump Room. That we’re all lonely for something we don’t know we’re lonely for. How else to explain the curious feeling that he goes around feeling like he misses somebody he’s never even met? Without the universalizing abstraction, the feeling would make no sense.

[282] (the big reason why people in pain are so self-absorbed and unpleasant to be around)

[283] S.S.R.I.s, of which Zoloft and the ill-fated Prozac were the ancestors.



See also:

  • Wireheading Done Right – which steel-mans the case for philosophical and practical hedonism by outlining how to change our reward circuitry in such a way that (1) we never need to feel bad, (2) we move between different varieties of bliss depending on their functional properties, and (3) we avoid becoming a pure replicator.
  • Tyranny of the Intentional Object – the stories we tell ourselves to explain every sense of pleasure and every sense of pain we feel are all, for the most part, deeply psychotic. That is, in so far as we attribute their valence to external triggers – in truth, every feeling’s valence is the result of the fine-grained structure of the qualia that implements it. You don’t scream at a bullet ant’s sting. You scream at the deep scintillating patterns of qualia shaped by dissonance, shearing, pinching, tearing, etc. (all symmetry breaking effects) that result from the sting.
  • Logarithmic Scales of Pleasure and Pain – I like that David Foster Wallace makes a distinction between run-of-the-mill anhedonia and the Great White Shark of clinical depression. More broadly, we ought to realize that bad experiences of pain and suffering are not just a fraction worse than the rest, but they can indeed be orders of magnitude worse. On the flip-side, this is also true for positive experiences, with illustrative examples such as Buddhist Jhanas, temporal lobe epilepsy, and the lucky 5-MeO-DMT-induced state of supreme bliss.
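The “orders of magnitude” point can be made concrete with a toy calculation. The sketch below is my own illustration, not QRI’s actual model: it assumes self-reports sit on a logarithmic scale, with a base chosen purely for illustration.

```python
# Hypothetical illustration (not QRI's published model): if self-reported
# pain/pleasure ratings behave logarithmically, then each step on a 0-10
# scale multiplies the underlying intensity by a constant factor.
# The base 10**0.5 (~3.16) is an assumed value chosen for illustration.

def implied_intensity(rating, base=10 ** 0.5):
    """Map a 0-10 self-report to a hypothetical underlying intensity."""
    return base ** rating

# On this assumed scale, a "10/10" experience is not 25% worse than an
# "8/10" -- it is a full order of magnitude worse:
ratio = implied_intensity(10) / implied_intensity(8)
print(round(ratio, 2))  # 10.0
```

This is the sense in which the worst experiences are not “a fraction worse” than ordinary bad ones: on a log scale, small differences in reported ratings hide multiplicative gulfs in intensity.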

Also relevant: David Pearce’s Abolitionist Bioethics, Mike Johnson’s A Future for Neuroscience, and Romeo Stevens’ post about Core Transformation.

Finally, see Scott Alexander’s book review of Infinite Jest.

Two Recent Presentations: (1) Hyperbolic Geometry of DMT Experiences, and (2) Harmonic Society

Here are two recent talks I gave. The first is about the Hyperbolic Geometry of DMT Experiences, given at the Harvard Science of Psychedelics Club in mid-September 2019. The second is about QRI’s models of art, given at a QRI party in the Bay Area in June 2019.


The Hyperbolic Geometry of DMT Experiences (@Harvard Science of Psychedelics Club)


Description

Andrés Gómez Emilsson from the Qualia Research Institute presents about the Hyperbolic Geometry of DMT Experiences.

At a high level, this video presents an algorithmic reduction of DMT phenomenology that imports concepts from hyperbolic geometry and dynamical systems theory in order to explain the “weirder than weird” hallucinations one can have on this drug. Andrés describes what different levels of DMT intoxication feel like in light of a model in which experience has both variable geometric curvature and variable information content. The benefit of this model cashes out in a novel approach to designing DMT experiences so as to maximize specific desired benefits.
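For readers unfamiliar with hyperbolic geometry, here is a minimal sketch (my own illustration, not material from the talk) using the Poincaré disk model. Points live inside the unit disk, and distances blow up without bound near the boundary; this is a toy analogue of how a hyperbolic phenomenal space can pack “too much surface area” into a bounded region.

```python
import math

# Poincaré disk model of the hyperbolic plane: points are pairs (x, y)
# with x**2 + y**2 < 1. Hyperbolic distance uses the standard formula
# d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2))).

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit disk."""
    sq_diff = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    den_u = 1 - (u[0] ** 2 + u[1] ** 2)
    den_v = 1 - (v[0] ** 2 + v[1] ** 2)
    return math.acosh(1 + 2 * sq_diff / (den_u * den_v))

# Same Euclidean separation (0.1), wildly different hyperbolic distances:
near_center = poincare_distance((0.0, 0.0), (0.1, 0.0))
near_edge = poincare_distance((0.89, 0.0), (0.99, 0.0))
print(near_center)  # modest
print(near_edge)    # more than ten times larger
```

The design choice here is just to make “curvature” operational: equal-looking steps on the page correspond to ever-larger intrinsic distances as you approach the boundary, which is the geometric intuition the talk leans on.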

See original article: The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

And the Explain Like I’m 5 version: ELI5 “The Hyperbolic Geometry of DMT Experiences”

Presentation outline:

  • Thermometers of Experience
  • The Leaf Metaphor
  • Introduction to Hyperbolic Geometry
  • DMT Levels
  • Level 1: Threshold (& Symmetry Hotel)
  • Level 2: Chrysanthemum
  • Level 3: Magic Eye (& Crystal Worlds)
  • Level 4: Waiting Room
  • Level 5: Breakthrough
  • Level 6: Amnesia
  • Energy – Complexity Landscape
  • Dynamic Systems
  • Fixed Point
  • Limit Cycles
  • Chaos
  • Noise Driven Structures
  • Turbulence
  • Conclusion
  • Super-Shulgin Academy
  • Atman Retreat
  • Wrap-Up

About the speaker: Andrés studied Symbolic Systems at Stanford (and has a masters in Computational Psychology, also from Stanford). He has professional experience in data science engineering, machine learning, and affective science. His research at the Qualia Research Institute ranges from algorithm design, to psychedelic theory, to neurotechnology development, to mapping and studying the computational properties of consciousness. Andrés blogs at qualiacomputing.com.

The Qualia Research Institute (QRI) is a non-profit based in the Bay Area close to San Francisco which seeks to discover the computational properties of experience. QRI has a “full-stack approach” to the science of consciousness which incorporates philosophy of mind, neuroscience, and neurotechnology. For more information see: qualiaresearchinstitute.org

The Harvard Science of Psychedelics Club hosts events on psychedelic research, meditation, neuroscience, students sharing their own experiences, and much more.


Credits:

– Wallpaper group 632 rotating along each symmetry element – Nick Xu

– Many of the images are by Paul Nylander: http://bugman123.com/

– The Hyperbolic Honeycomb images and 3D prints are by Henry Segerman, who also has an awesome Youtube channel where he shows 3D printed math. We used his design to print the Honeycombs we were passing around during the lecture: https://www.youtube.com/user/henryseg

– Space-Time Dynamics in Video Feedback: Jim Crutchfield, Entropy Productions, Santa Cruz (1984): https://youtu.be/B4Kn3djJMCE

Many thanks to Andrew Zuckerman and Kenneth Shinozuka for helping organize this event. And thanks to David Pearce, Michael Johnson, Romeo Stevens, Quintin Frerichs, the anonymous trippers, and many others for making this work real.


And here are the slides:

 

Dynamic Systems animations:



Harmonic Society: 8 Models of Art for a Scientific Paradigm of Aesthetic Qualia


Description

Andrés Gómez Emilsson from the Qualia Research Institute gives a presentation about how art works according to modern neuroscience and philosophy of mind.

The video discusses eight different models of art: models 1 through 4 have been discussed in academic literature and the current intellectual zeitgeist, while models 5 through 8 are new, original, and the direct result of recent insights about consciousness as uncovered by modern neuroscience, philosophy of mind, and the work of the Qualia Research Institute.

Abstract:

We start by assuming that there are real stakes in art. This motivates the analysis of this subject matter, and it focuses where we place our gaze. We examine a total of eight models for “what art might be about”, divided into two groups. The first group of four are some of the most compelling contemporary models, which derive their strength from fields such as philosophy of language, economics, evolutionary psychology, and anthropology. These models are: (1) art as a word only definable in a family resemblance way with no necessary or sufficient features, (2) art as social signaling of desirable genetic characteristics, (3) art as Schelling point creation, and (4) art as the cultivation of sacred experiences. These four models, however enlightening, nonetheless only account for what David Marr might describe as the computational level of abstraction while leaving the algorithmic and implementation levels of abstraction unexamined. They explain what art is about in terms of why it exists and what its coarse effects are, but not the nature of its internal representations or its implementation. Hence we propose a second group of four models in order to get a “full-stack” view of art. These models are: (5) art as a tool for exploring the state-space of consciousness, (6) art as a method for changing the energy parameter of experience, (7) art as activities that induce neural annealing (which implements novel valence modulation, i.e. surprising pain/pleasure effects), and (8) art as an early prototype of a future affective language that will allow diverse states of consciousness to make sense of each other. These frameworks address how art interfaces with consciousness and how its key valuable features might be implemented neurologically. We conclude with a brief look at how embracing these new paradigms could, in principle, lead to the creation of a society free from suffering and interpersonal misunderstanding. Such a society, a.k.a. Harmonic Society, would be designed with the effect of guaranteeing positive valence interactions using principles from a post-Galilean science of consciousness.

———————–

The 8 models of art are:

1. Art as family resemblance (Semantic Deflation)

2. Art as Signaling (Cool Kid Theory)

3. Art as Schelling-point creation (a few Hipster-theoretical considerations)

4. Art as cultivating sacred experiences (self-transcendence and highest values)

5. Art as exploring the state-space of consciousness (ϡ☀♘🏳️‍🌈♬♠ヅ)

6. Art as something that messes with the energy parameter of your mind (ꙮ)

7. Art as puzzling valence effects (emotional salience and annealing as key ingredients)

8. Art as a system of affective communication: a protolanguage to communicate information about worthwhile qualia (which culminates in Harmonic Society).


The presentation is based on an essay published in the Berlin-based art magazine Art Against Art (see: Issue #6).

Article is posted online here: Models 1 & 2, 3 & 4, 5 & 6, 7 & 8.


See more about the Qualia Research Institute at: https://www.qualiaresearchinstitute.org/

Andrés blogs at Qualia Computing: Top 10 Qualia Computing Articles


Infinite bliss!!!


And here are the slides:

Does Full-Spectrum Superintelligence Entail Benevolence?

Excerpt from: The Biointelligence Explosion by David Pearce


The God-like perspective-taking faculty of a full-spectrum superintelligence doesn’t entail distinctively human-friendliness any more than a God-like superintelligence could promote distinctively Aryan-friendliness. Indeed it’s unclear how benevolent superintelligence could want omnivorous killer apes in our current guise to walk the Earth in any shape or form. But is there any connection at all between benevolence and intelligence? Pre-reflectively, benevolence and intelligence are orthogonal concepts. There’s nothing obviously incoherent about a malevolent God or a malevolent – or at least a callously indifferent – Superintelligence. Thus a sceptic might argue that there is no link whatsoever between benevolence – on the face of it a mere personality variable – and enhanced intellect. After all, some sociopaths score highly on our [autistic, mind-blind] IQ tests. Sociopaths know that their victims suffer. They just don’t care.

However, what’s critical in evaluating cognitive ability is a criterion of representational adequacy. Representation is not an all-or-nothing phenomenon; it varies in functional degree. More specifically here, the cognitive capacity to represent the formal properties of mind differs from the cognitive capacity to represent the subjective properties of mind. Thus a notional zombie Hyper-Autist robot running a symbolic AI program on an ultrapowerful digital computer with a classical von Neumann architecture may be beneficent or maleficent in its behaviour toward sentient beings. By its very nature, it can’t know or care. Most starkly, the zombie Hyper-Autist might be programmed to convert the world’s matter and energy into heavenly “utilitronium” or diabolical “dolorium” without the slightest insight into the significance of what it was doing. This kind of scenario is at least a notional risk of creating insentient Hyper-Autists endowed with mere formal utility functions rather than hyper-sentient full-spectrum superintelligence. By contrast, full-spectrum superintelligence does care in virtue of its full-spectrum representational capacities – a bias-free generalisation of the superior perspective-taking, “mind-reading” capabilities that enabled humans to become the cognitively dominant species on the planet. Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.

Could there arise “evil” mirror-touch synaesthetes? In one sense, no. You can’t go around wantonly hurting other sentient beings if you feel their pain as your own. Full-spectrum intelligence is friendly intelligence. But in another sense yes, insofar as primitive mirror-touch synaesthetes are prey to species-specific cognitive limitations that prevent them acting rationally to maximise the well-being of all sentience. Full-spectrum superintelligences would lack those computational limitations in virtue of their full cognitive competence in understanding both the subjective and the formal properties of mind. Perhaps full-spectrum superintelligences might optimise your matter and energy into a blissful smart angel; but they couldn’t wantonly hurt you, whether by neglect or design.

More practically today, a cognitively superior analogue of natural mirror-touch synaesthesia should soon be feasible with reciprocal neuroscanning technology – a kind of naturalised telepathy. At first blush, mutual telepathic understanding sounds a panacea for ignorance and egotism alike. An exponential growth of shared telepathic understanding might safeguard against global catastrophe born of mutual incomprehension and WMD. As the poet Henry Wadsworth Longfellow observed, “If we could read the secret history of our enemies, we should find in each life sorrow and suffering enough to disarm all hostility.” Maybe so. The problem here, as advocates of Radical Honesty soon discover, is that many Darwinian thoughts scarcely promote friendliness if shared: they are often ill-natured, unedifying and unsuitable for public consumption. Thus unless perpetually “loved-up” on MDMA or its long-acting equivalents, most of us would find mutual mind-reading a traumatic ordeal. Human society and most personal relationships would collapse in acrimony rather than blossom. Either way, our human incapacity fully to understand the first-person point of view of other sentient beings isn’t just a moral failing or a personality variable; it’s an epistemic limitation, an intellectual failure to grasp an objective feature of the natural world. Even “normal” people share with sociopaths this fitness-enhancing cognitive deficit. By posthuman criteria, perhaps we’re all quasi-sociopaths. The egocentric delusion (i.e. that the world centres on one’s existence) is genetically adaptive and strongly selected for over hundreds of millions of years. Fortunately, it’s a cognitive failing amenable to technical fixes and eventually a cure: full-spectrum superintelligence. The devil is in the details, or rather the genetic source code.



Glossary of Qualia Research Institute Terms

This is a glossary of key terms and concept handles that are part of the memetic ecosystem of the Qualia Research Institute. Reading this glossary is itself a great way to become acquainted with this emerging memeplex. If you do not know what a memeplex is… you can find its definition in this glossary.


Basics

Consciousness (standard psychology, neuroscience, and philosophy term): There are over a dozen common uses for the word consciousness, and all of them are interesting. Common senses include: self-awareness, linguistic cognition, and the ability to navigate one’s environment. With that said, the sense of the word in the context of QRI is more often than not: the very fact of experience, that experience exists and there is something that it feels like to be. Talking loosely and evocatively, rather than formally and precisely, consciousness refers to “what experience is made of”. Of course, formalizing that statement requires a lot of unpacking about the nature of matter, time, selfhood, and so on. But this is a start.

Qualia (standard psychology, neuroscience, and philosophy term): This word refers to the range of ways in which experience presents itself. Experiences can be richly colored or bare and monochromatic, they can be spatial and kinesthetic or devoid of geometry and directions, they can be flavorfully blended or felt as coming from mutually unintelligible dimensions, and so on. Classic qualia examples include things like the redness of red, the tartness of lime, and the glow of bodily warmth. However, qualia extend into categories far beyond the classic examples, beyond the wildest of our common-sense conceptions. There are modes of experience that differ from everything we have ever experienced as much as vision qualia differ from sound qualia.

Valence / Hedonic Tone (standard psychology, neuroscience, and philosophy term): How good or bad an experience feels: each experience expresses a balance between positive, neutral, and negative notes. It is the aspect of experience that accounts for its pleasant and unpleasant qualities. The term is evocative of pleasant sensations such as warming up with a blanket and a cup of hot chocolate when cold. That said, hedonic tone refers to a much broader class of sensations than just the feeling of warmth. For example, the music appreciation enhancement produced by drugs can be described as “enhanced hedonic tone in sound qualia”. Hedonic tone can appear in any sensory modality (touch, smell, sight, etc.), and even more generally, in every facet of experience (such as cognitive and proprioceptive elements, themselves capable of coming with their own flavor of euphoria/dysphoria). Experiences with both negative and positive notes are called “mixed”; these are in fact the most common.


Helpful Philosophy

Ontology (standard high-level philosophy term; ref: 1): At the most basic level, an ontology is an account of what exists and what is real.

Epistemology (standard high-level philosophy term; ref: 1): The set of strategies, heuristics, and methods for knowing. In the context of consciousness research, what constitutes a good epistemology is a highly contentious subject. Some scientists argue that we should only take into account objectively-measurable third-person data in order to build models and postulate theories about consciousness (cf. heterophenomenology). On the other extreme, some argue that the only information that counts is first-person experiences and what they reveal to us (cf. new mysterianism). Somewhere in the middle, QRI fully embraces objective third-person data while also recognizing the importance of skepticism and epistemic rigor when deciding which first-person accounts should be taken seriously. Its epistemology accepts information gained from alien state-spaces of consciousness as long as it meets certain criteria. For example, we are very careful to distinguish between information about the intentional content of experience (what it was about) and information about its phenomenal character (how it felt). As a general heuristic, QRI places more value on trip reports that emphasize the phenomenal character of the experience (e.g. “30Hz flashes with slow-decay harmonic reverb audio hallucinations”) than on those centered on intentional content (e.g. “the DMT alien said I should learn to play the guitar”). Ultimately, first-person and third-person data are complementary views of the same substrate of consciousness (cf. dual-aspect monism), and so both are equally necessary for a complete scientific account of consciousness.

Functionalism (standard high-level philosophy term; ref: 1, 2): In Philosophy of Mind, functionalism is the view that consciousness is produced by (and in some cases identical with) not only the input-output mapping of an information-processing system, but also the internal relationships that make that information-processing possible. In light of Marr’s Levels of Analysis (see below), we could say that functionalism identifies the content of conscious experience with the algorithmic level of analysis. Hence this philosophy is usually presented in conjunction with the concept of “substrate neutrality”, which holds that the specific material makeup of the brain is not necessary for consciousness to arise from it. If we implemented the same information-processing functions that are encoded in the neural networks of a brain using rocks, buckets of water, or a large crowd instantiating a computer, we would generate the same experiences the brain generates on its own. Importantly, functionalism tends to deny any essential role of the substrate in the generation of consciousness, and will typically also deny any significant interaction between levels of analysis (see below).

Eliminativism (standard high-level philosophy term; ref: 1, 2, 3): In Philosophy of Mind, eliminativism refers to a cluster of ideas concerning whether the word “consciousness” is clear enough to be useful for making sense of how brains work. One key idea in eliminativist views is that most of the language that we use to talk about experiences (from specific emotions to qualia) is built on top of folk-psychology rather than physical reality. In a way, terms such as “experience” and “feelings” are an interface for the brain to model itself and others in a massively simplified but adaptive way. There is no reason why our evolved intuitions about how the brain works should even approximate how it really works. In many cases, eliminativists advocate starting from scratch and abandoning our intuitions about experience, sticking to hard physical and computational analysis of the brain as empirically measured. This view suggests that once we truly understand scientifically how brains work, the language we will use to talk about it will look nothing like the way we currently speak about our experiences, and that this change will be so dramatic that we would effectively start thinking as if “consciousness never existed to begin with”.

Presentism (standard high-level philosophy term; ref: 1): The view that only the present is real, the past and the future being illusory inferences and projections made in the present. Oftentimes presentism posits that change is a fundamental aspect of the present and that the feeling of the passage of time is based on the ever-changing nature of reality itself.

Eternalism (standard high-level philosophy term; ref: 1): The view that every here-and-now in reality is equally real. Rather than thinking of the universe as a “now” sandwiched between a “past” and “future”, eternalism posits that it is more accurate to simply describe pairs of moments as having a “before” and “after” relationship, but neither of them being in the future or past. Some of the strongest arguments for eternalism come from Special and General Relativity (see: Rietdijk–Putnam argument), where space-time forms a continuous 4-dimensional geometric shape that stands together as a whole, and where any notion of a “present” is only locally valid. In some sense, eternalism says that all of reality exists in an “eternal now” (including your present, past, and future selves).

Personal Identity (standard high-level philosophy term; ref: 1): The relevant sense of this term for our purposes refers to the set of questions about what constitutes the natural unit for subjects of experience. Questions such as “will the consciousness who wakes up in my current body tomorrow morning be me?”, “if we make an atom-by-atom identical copy of me right now, will I start existing in it as well?”, “if you conduct a Wada Test, is the consciousness generated by my right hemisphere alone also me?”, and so on.

Closed Individualism (coined by Daniel Kolak; ref: 1): In its most basic form, this is the common-sense personal identity view that you start existing when you are born and stop existing when you die. According to this view each person is a different subject of experience with an independent existence. One can believe in a soul ontology and be a Closed Individualist at the same time, with the correction that you exist as long as your soul exists, which could be the case even before or after death.

Empty Individualism (coined by Daniel Kolak; ref: 1, 2, 3): This personal identity view states that each “moment of experience” is its own separate subject. While it may seem that we exist as persons with an existence that spans decades, Empty Individualism does not associate a single subject to each person. Rather, each moment a new “self” is born and dies, existing for as long as the conscious event takes place (something that could be anywhere between a femtosecond and a few hundred milliseconds, depending on which scientific theory of consciousness one believes in).

Open Individualism (coined by Daniel Kolak; ref: 1, 2, 3, 4): This is the personal identity view that we are all one single consciousness. The apparent partitions and separations between the universal consciousness, in this view, are the result of partial information access from one moment of experience to the next. Regardless, the subject who gets to experience every moment is the same. Each sentient being is fundamentally part of the same universal subject of experience.

Goldilocks Zone of Oneness (QRI term; 1, 2, 3): Having realized that there are both positive and negative psychological aspects to each of the three views of personal identity discussed (Closed, Empty, and Open Individualism), the Goldilocks Zone of Oneness emerges as a conceptual resolution. Open Individualism comes with a solution to the fear of death, but it can also give rise to a sort of cosmic solipsism. Closed Individualism allows you to feel fundamentally special, but also disconnected from the universe and fundamentally misunderstood by others. Empty Individualism is philosophically satisfying, but it may come with a sense of lack of agency and the fear of being a time-slice that is stuck in a negative place. The Goldilocks Zone of Oneness posits that there is a way to transcend classical logic in personal identity, and that the truth incorporates elements of all three views at once. In the Goldilocks Zone of Oneness one is simultaneously part of a whole but also not the entirety of it. One can relate to others by having a shared nature, while also being able to love them on their own terms by recognizing their unique identity. This view has yet to be formalized, but in the meantime it may prove to be pragmatically useful for community-building.

The Problem of Other Minds (standard high-level philosophy term; ref: 1, 2): This is the philosophical conundrum of whether other people (and sentient beings in general) are conscious. While your own consciousness is self-evident, the consciousness of others is inferred. Possible solutions involve technologies such as the Generalized Wada Test (see below), phenomenal puzzles, and thalamic bridges, which you can use to test the consciousness of another being by having it solve a problem that can only be solved by making comparisons between qualia values.

Solipsism (standard high-level philosophy term; ref: 1, 2, 3): In its classic formulation, solipsism refers to a state of existence in which the only person who is conscious is “oneself”, which resides in the body of an individual human over time. A more general version of solipsism involves crossing it with personal identity views (see above). Through this lens, the classic person-centric formulation of solipsism refers exclusively to a Closed Individualist universe. Alternatively, Open Individualism also has a solipsistic interpretation: it is compatible with (and in at least one sense entails) solipsism, since all the experiences in the entire multiverse are experiences of a single solipsistic cosmic consciousness. With an Empty Individualist universe, too, we can have a solipsistic interpretation of reality. In one version, one uses epiphenomenalism to claim that this moment of experience is the only one that is conscious, even though the whole universe still exists and followed an evolutionary path that led it to the configuration in which you stand right now. In another version, one’s experience is the result of the fact that in the cosmic void everything can happen. This is not because it is likely, but because there is a boundless amount of time for it to happen: no matter how thin its probability is, it will still take place at some point (see: Boltzmann brain). That said, one’s present experience (with its highly specific information content) being the only one that exists seems very improbable a priori. It is like imagining that, despite the fact that “the void can give rise to anything”, the only thing that actually gets materialized is an elephant. Why would it produce only an elephant, of all things? Likewise, solipsistic Empty Individualism has this problem: why would this experience be the only one? To cap it off, we can also reason about solipsism in relation to hybrid views of personal identity. In their case solipsism either fails, or its formulation needs to be complicated significantly. This is partly why the concept of the Goldilocks Zone of Oneness (see above) might be worth exploring, as it may be a way out of ultimate solipsism. On a much more practical level, it may be possible to use Phenomenal Puzzles, Wada Tests, and ultimately mindmelding to test the classical (Closed Individualist) formulation of solipsism.

Suffering-Focused Ethics (recent philosophy term from rationalist-adjacent communities; ref: 1, 2): The view that our overriding obligation is to focus on suffering, and in particular to take seriously the prevention of extreme suffering. This is not unreasonable if we take into account the logarithmic scales of pain and pleasure, which suggest that the majority of suffering is concentrated in a small percentage of experiences of intense suffering. This is why caring about the extreme cases matters so much.

Antinatalism (standard high-level philosophy term; ref: 1, 2): This is the view that being born entails a net negative. Classic formulations of this view tend to implicitly assume Closed Individualism, where there is someone who may or may not be born and it is meaningful to consider this a yes-or-no question with ontological bearings. Under Open Individualism the question becomes whether there should be any conscious being at all, for neither preventing someone’s birth nor committing individual suicide entails the real birth or death of a consciousness. They would merely add to or subtract from the long library corridors of experiences had by universal consciousness. And in Empty Individualism, antinatalism might be seen through the light of “preventing specific experiences with certain qualities”. For example, having an experience of extreme suffering is not harming a person (though it may have further psychological repercussions), but rather harming that very experience in an intrinsic way. This view would underscore the importance of preventing the existence of experiences of intense suffering rather than preventing the existence of people as such. A final note on antinatalism is that even in its original formulation we encounter the problem that selection pressures make any trait that reduces inclusive fitness disappear in the long run: the traits that predispose people to such views would simply be selected out. A more fruitful way of improving the world is to encourage the elimination of suffering in ways that do not reduce inclusive fitness, such as the prevention of genetic spelling errors and diseases that carry a high burden of suffering.

Tyranny of the Intentional Object (coined by David Pearce; ref: 1, 2): The way our reward architecture is constructed makes it difficult for us to have a clear sense of what it is that we enjoy about life. Our brains reinforce the pursuit of specific objects, situations, and headspaces, which gives the impression that these are intrinsically valuable. But this is an illusion. In reality such conditions trigger positive valence changes in our experience, and it is those changes that we are really after (as evidenced by the way in which our reward architecture is modified in the presence of euphoric and dysphoric drugs and external stimuli such as music). We call this illusion the tyranny of the intentional object because in philosophy “intentionality” refers to “what the experience is about”. Our world-simulations chain us to the feeling that external objects, circumstances, and headspaces are the very source of value. What is more, dissociating from such sources of positive valence triggers negative valence, so critical insight into the way our reward architecture really works is itself negatively reinforced by it.


Formalism Terms

Formalism (standard high-level philosophy term; ref: 1, 2): Formalism is a philosophical and methodological approach for analyzing systems which postulates the existence of mathematical objects such that their mathematical features are isomorphic to the properties of the system. An example of a successful formalism is the use of Maxwell’s equations in order to describe electromagnetic phenomena.

Qualia Formalism (QRI term; 1, 2, 3): Qualia Formalism means that for any given physical system that is conscious, there will be a corresponding mathematical object associated to it such that the mathematical features of that object will be isomorphic to the phenomenology of the experience generated by the system.

Marr’s Levels of Analysis (standard cognitive science term; ref: 1, 2): This powerful analytic framework was developed by cognitive scientist David Marr to talk more precisely about vision, but it is more broadly applicable to information processing systems in general. It is a way to break down what a system does in a conceptually clear fashion that lends itself to a clean analysis.

Computational Level (standard cognitive science term; ref: 1, 2): The first of three of Marr’s Levels of Analysis, the Computational Level of abstraction describes what the system does from a third-person point of view. That is, the input-output mapping, the runtime complexity for the problems it can solve, and the ways in which it fails are all facts about a system that are at the computational level of abstraction. In a simple example case, we can describe an abacus at the computational level by saying that it can do sums, subtractions, multiplications, divisions, and other arithmetic operations.

Algorithmic Level (standard cognitive science term; ref: 1, 2): The second of three of Marr’s Levels of Analysis, the Algorithmic Level of abstraction describes the internal representations, operations, and their interactions used to transform the input into the output. In aggregate, representations, operations, and their interactions constitute the algorithms of the system. As a general rule, we find that there are many possible algorithms that give rise to the same computational-level properties. Following the simple example case of an abacus, the algorithmic-level account would describe how passing beads from one side to another and using each row to represent different orders of magnitude are used to instantiate algorithms to perform arithmetic operations.

Implementation Level (standard cognitive science term; ref: 1, 2): The third of three of Marr’s Levels of Analysis, the Implementation Level of abstraction describes the way in which the system’s algorithms are physically instantiated. Following the case of the abacus, an implementation-level account would detail how the various materials of the abacus are put together in order to allow the smooth passing of beads between the sides of each row and how to prevent them from sliding by accident (and “forgetting” the state).
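As a toy illustration of our own (not from the glossary), the three levels for the abacus can be sketched in Python: the computational level is the arithmetic specification, the algorithmic level is the bead-and-carry procedure, and the implementation level is whatever physically realizes it.

```python
# Toy sketch (ours, not QRI's): Marr's three levels applied to the abacus.

# Computational level: WHAT the system does -- the input-output mapping,
# e.g. "given the current total and a number n, produce total + n".

# Algorithmic level: HOW it does it -- one row per decimal digit,
# "beads" pushed along a row, with a carry when a row overflows.
class Abacus:
    def __init__(self, rows=6):
        self.rows = [0] * rows  # rows[0] = ones, rows[1] = tens, ...

    def add(self, n):
        for i in range(len(self.rows)):  # deposit n digit by digit
            self.rows[i] += n % 10
            n //= 10
        for i in range(len(self.rows) - 1):  # propagate carries upward
            carry, self.rows[i] = divmod(self.rows[i], 10)
            self.rows[i + 1] += carry
        return self

    def value(self):
        return sum(d * 10 ** i for i, d in enumerate(self.rows))

# Implementation level: the physical substrate (wood and beads, or here
# CPython integers in RAM). For an abacus the substrate is irrelevant to
# the two levels above -- exactly the "no interaction between levels" case.
```

Many different algorithmic accounts (bead-passing, binary adders, lookup tables) realize the same computational-level description, which is the point of keeping the levels separate.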

Interaction Between Levels (obscure cognitive science concept handle; ref: 1, 2): Some information-processing systems can be fully understood by describing each of Marr’s Levels of Analysis separately. For example, it does not matter whether an abacus is made of metal, wood, or even if it is digitally simulated in order to explain its algorithmic and computational-level properties. But while this is true for an abacus, it is not the case for analog systems that leverage the unique physical properties of their components to do computational shortcuts. In particular, in quantum computing one intrinsically requires an understanding of the implementation-level properties of the system in order to explain the algorithms used. Hence, for quantum computing, there are strong interactions between levels of analysis. Likewise, we believe this is likely going to be the case for the algorithms our brains perform by leveraging the unique properties of qualia.

Natural Kind (standard high-level philosophy term; ref: 1, 2): Natural kinds are things whose objective existence makes it possible to discover durable facts about them. They are the elements of a “true ontology” for the universe, and what “carves reality at its joints”. This is in contrast to “reifications” which are aggregates of elements with no unitary independent existence.

State-Space (standard term in physics and mathematics; ref: 1, 2): A state-space of a system is a geometric map where each point corresponds to a particular state of the system. Usually the space has a Euclidean geometry with a number of dimensions equal to the number of variables in the system, so that the value of each variable is encoded in the value of a corresponding dimension. This is not always the case, however. In the general case, not all points in the state-space are physically realizable. Additionally, some system configurations do not admit a natural decomposition into a constant set of variables. This may give rise to irregularities in the state-space, such as non-Euclidean regions or a variable number of dimensions.

State-Space of Consciousness (coined by David Pearce; 1, 2, 3): This is a hypothetical map that contains the set of all possible experiences, organized in such a way that the similarities between experiences are encoded in the geometry of the state-space. For example, the experience you are having right now would correspond to a single point in the state-space of consciousness, with the neighboring experiences being Just Noticeably Different from your experience right now (e.g. simplistically, we could say they would be different from your current experience “by a single pixel”).

Qualia Value (QRI term; ref: 1): Starting with examples: the scent of cinnamon, a spark of sourness, a specific color hue, etc. are all qualia values. Any particular quality of experience that cannot be decomposed further into overlapping components is a qualia value.

Qualia Variety (QRI term; ref: 1): A qualia variety refers to the set of qualia values that belong to the same category (for example, tentatively, phenomenal colors are all part of the same qualia variety, which is different from the qualia variety of phenomenal sounds). A possible operationalization for qualia varieties involves the construction of equivalence classes based on the ability to transform a given qualia value into another via a series of Just-Noticeable Differences. For example, in the case of color, we can transform a given qualia value, like a specific shade of blue, into another qualia value, like a shade of green, by traversing a straight line from one to the other in the CIELAB color space. Tentatively, it is not possible to do the same between a shade of blue and a particular phenomenal sound. That said, the large number of unknowns (and unknown unknowns!) about the state-space of consciousness does not allow us to rule out the existence of qualia values that can bridge the gap between color and sound qualia. If that turned out to be the case, we would need to rethink our approach to defining qualia varieties.
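The straight-line traversal through CIELAB can be sketched concretely. A minimal sketch, assuming the classic ΔE*ab (1976) color-difference formula and the common ~2.3 just-noticeable-difference rule of thumb; the CIELAB coordinates are textbook approximations and the helper names are ours:

```python
import math

# Walk a straight line between two CIELAB colors in steps no larger than
# one just-noticeable difference (Delta E*ab of ~2.3 is a common rule of
# thumb, treated here as an assumption).
JND = 2.3

def delta_e(a, b):
    """Euclidean distance in CIELAB, i.e. the classic Delta E*ab (1976)."""
    return math.dist(a, b)

def jnd_path(start, end):
    """Chain of intermediate colors, consecutive ones at most one JND apart."""
    steps = max(1, math.ceil(delta_e(start, end) / JND))
    return [tuple(s + (e - s) * t / steps for s, e in zip(start, end))
            for t in range(steps + 1)]

blue  = (32.3,  79.2, -107.9)  # approximate CIELAB for sRGB blue
green = (87.7, -86.2,   83.2)  # approximate CIELAB for sRGB green
path = jnd_path(blue, green)
# Every consecutive pair along the path is within one JND, so blue and
# green fall in the same equivalence class under this operationalization.
```

Under the same operationalization, no such JND-chain exists between a color and a sound, which is what tentatively places them in different qualia varieties.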

Region of the State-Space of Consciousness (QRI term; ref: 1, 2): A set of possible experiences that are similar to each other in some way. Given an experience, the “experiences nearby in the state-space of consciousness” are those that share its qualities to a large degree but have variations. The term can be used to point at experiences with a given property (such as “high-valence” and “phenomenal color”).

The Binding Problem (standard psychology, neuroscience, and philosophy term; ref: 1, 2): The binding problem (also called the combination problem) arises from asking the question: how is it possible that the activity of a hundred billion neurons that are spatially distributed can simultaneously contribute to a unitary moment of experience? It should be noted that in the classical formulation of the problem we start with an “atomistic” ontology where the universe is made of space, particles, and forces, and the question then becomes how spatially-distributed discrete particles can “collaborate” to form a unified experience. But if one starts out with a “globalistic” ontology where the universe is made of a universal wavefunction, then the question that arises is how something that is fundamentally unitary (the whole universe) can give rise to “separate parts” such as individual experiences, which is often called “the boundary problem”. Thus, the “binding problem” and “the boundary problem” are really the same problem, but starting with different ontologies (atomistic vs. globalistic).

Phenomenal Binding (standard high-level philosophy term; ref: 1, 2): This term refers to the hypothetical mechanism of action that enables information that is spatially-distributed across a brain (and more generally, a conscious system) to simultaneously contribute to a unitary discrete moment of experience.

Local Binding (lesser-known cognitive science term; ref: 1): Local binding refers to the way in which the features of our experience are interrelated. Imagine you are looking at a sheet of paper with a drawing of a blue square and a yellow triangle. If your visual system works well you do not question which shape is colored blue; the color and the shapes come unified within one’s experience. In this case, we would say that color qualia and shape qualia are locally bound. Disorders of perception show that this is not always the case: people with simultanagnosia find it hard to perceive more than one phenomenal object at a time and thus confuse the association between the colors and shapes they are not directly attending to; people with schizophrenia have local binding problems in the construction of their sense of self; and people with motion blindness have a failure of local binding between sensory stimuli separated by physical time.

Global Binding (lesser-known cognitive science term; ref: 1, 2): Global binding refers to the fact that the entirety of the contents of each experience is simultaneously apprehended by a unitary experiential self. As in the example for local binding, while blue and the square (and the yellow and the triangle) are locally bound into separate phenomenal objects, both the blue square and the yellow triangle are globally bound into the same experience.


The Mathematics of Valence

Valence Realism (QRI term; ref: 1): This is the claim that valence is a crisp phenomenon of conscious states upon which we can apply a measure. Also defined as: “Valence (subjective pleasantness) is a well-defined and ordered property of conscious systems.”

Valence Structuralism (QRI term; ref: 1): This is the claim that valence has (or could have) a relatively simple encoding in the mathematical representation of a system’s qualia.


Symmetry Theory of Valence (QRI term; 1, 2, 3): Given a mathematical object isomorphic to the qualia of a system, the mathematical property which corresponds to how pleasant it is to be that system is that object’s symmetry.
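The notion of “that object’s symmetry” can be made concrete with a toy illustration (our construction for this glossary, not QRI’s actual formalism): if a state’s qualia were represented as a small graph, one crude symmetry measure would be its number of graph automorphisms, which we can count by brute force:

```python
# Toy symmetry measure for a small graph: count its automorphisms, i.e. the
# vertex relabelings that leave its edge structure unchanged.
from itertools import permutations

def automorphism_count(n, edges):
    """Count vertex permutations of an n-node undirected graph that preserve its edges."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(n)):
        mapped = {frozenset((perm[a], perm[b])) for a, b in edges}
        if mapped == edge_set:
            count += 1
    return count

# A 4-cycle (highly symmetric) vs. a 4-node path (less symmetric):
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
path = [(0, 1), (1, 2), (2, 3)]
print(automorphism_count(4, cycle))  # dihedral symmetry: 8 automorphisms
print(automorphism_count(4, path))   # only identity and reversal: 2
```

Under the Symmetry Theory of Valence, the more symmetric representation (the cycle, with 8 automorphisms to the path’s 2) would correspond to the more pleasant state.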

Valence Gradients (QRI term; ref: 1, 2): It is postulated that one of the important inputs that contributes to our decision-making involves “valence gradients”. To understand what a valence gradient is, it is helpful to provide an example. Imagine coming back from dancing in the rain and feeling pretty cold. In order to warm yourself up you get into the shower and turn on the hot water. Ouch! Too hot, so you dial down the temperature. Brrr! Now it’s too cold, so you dial up the temperature just a little. Ah, just perfect! See, during this process you evaluated, at each point, in what way you could modify your experience in order to make it feel better. At first the valence gradient was pointing in the direction of higher temperature. As soon as you felt it being too hot, the valence gradient changed direction and pointed to lower temperature. And so on until it feels like there is nothing else you could do to improve how you feel. In the more general case, we posit that a significant input into our decision-making is the direction of change along which we believe our experience would improve. At an implementation level of analysis (see above) the very syntax of our experience might be built out of a landscape of valence gradients. Noticing them is possible, but it is a task akin to a fish noticing water. We use valence gradients to navigate both the external and the internal world in such a basic and all-pervasive way that it is easy to miss this fact altogether. When we justify why we did such and such, we often forget that a big component of the decision was made based on how each of the options felt. The difficulty we face when trying to point at the specific valence gradients that influence our decision-making is one of the reasons why the tyranny of the intentional object (see above) arises: what pulls and pushes us is not explicitly represented in our conceptual scheme.
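The shower example can be sketched as a toy control loop. This is only an illustration of gradient-following: the quadratic valence function and the step size are our illustrative assumptions, not a model of thermoception.

```python
# Toy model of decision-making guided by valence gradients: the agent probes
# how a small change would feel and moves in the direction that feels better.

def valence(temp, ideal=38.0):
    """Illustrative valence function: peaks at the ideal shower temperature."""
    return -(temp - ideal) ** 2

def follow_valence_gradient(temp, lr=0.1, iters=200, eps=1e-3):
    for _ in range(iters):
        # Estimate the local valence gradient ("would warmer feel better?").
        grad = (valence(temp + eps) - valence(temp - eps)) / (2 * eps)
        temp += lr * grad  # move toward higher valence
    return temp

print(round(follow_valence_gradient(15.0), 1))  # settles near 38.0
```

Starting too cold (15°C) or too hot (50°C), the loop converges on the temperature at which the gradient vanishes, the point where “there is nothing else you could do to improve how you feel”.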


CDNS Analysis (QRI term; ref: 1, 2): A scientific and philosophical hypothesis with implications for measuring valence in conscious systems. Namely, the hypothesis is that the Symmetry Theory of Valence is expressed in the structure of neural patterns over time, implying that the valence of a brain will be in part determined by neural dissonance, consonance, and noise. This makes precise, empirically testable predictions within paradigms such as Connectome-Specific Harmonic Waves.
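A minimal sketch of the dissonance component of a CDNS-style analysis, using a Plomp-Levelt-style roughness curve over pairs of frequencies (the curve constants follow a commonly used published fit; treating neural harmonics like acoustic partials is our simplifying assumption):

```python
# Toy pairwise-dissonance calculation in the spirit of CDNS analysis:
# sum a roughness curve over all pairs of frequencies in a "spectrum".
import math

def pair_dissonance(f1, f2, amp=1.0):
    """Plomp-Levelt-style dissonance for one frequency pair (toy parametric fit)."""
    f_low, f_high = min(f1, f2), max(f1, f2)
    # Critical-bandwidth scaling: roughness peaks at ~25% of a critical band.
    s = 0.24 / (0.021 * f_low + 19)
    x = s * (f_high - f_low)
    return amp * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def total_dissonance(freqs):
    """Sum pairwise dissonance over all frequency pairs in a spectrum."""
    return sum(pair_dissonance(freqs[i], freqs[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

# An octave (2:1) should come out far more consonant than a minor second:
print(total_dissonance([440.0, 880.0]) < total_dissonance([440.0, 469.3]))
```

Within the CDNS hypothesis, the analogous computation would run over the amplitudes of connectome-specific harmonic waves rather than acoustic partials, with “noise” estimated separately from the residual unexplained by the harmonic decomposition.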


Research Paradigms

Evolutionary Qualia (QRI term): Evolutionary Qualia is a scientific discipline that will emerge as the science of consciousness improves to the point where cellular gene expression analysis, brain imaging, and interpretation algorithms can infer the qualia present in the experience of animal brains in general. For instance, we may find out that certain combinations of receptor types and protein shapes inside neurons of the visual cortex are necessary and sufficient for generating color qualia. Additionally, such understanding could be complemented with an information-theoretic account of why color qualia are more effective (cost-benefit-wise) for certain information-processing tasks than other qualia. Together, these two kinds of understanding will allow us to explain why the specific qualia that we have were recruited by natural selection for information-processing purposes. Evolutionary Qualia is the (future) discipline that explains from an evolutionary point of view why we have the specific qualia and patterns of local binding that we do (said differently, it will explain why “the walls of our world-simulation are painted the way they are”). So while Evolutionary Psychology may explain why we have evolved to have certain emotions in terms of their behavioral effects, Evolutionary Qualia will explain why the emotions feel the way they do and how those specific feelings happen to have the right “shape” for the information-processing tasks they accomplish.

Algorithmic Reduction (QRI term; ref: 1, 2): A reduction is a model that explains a set of behaviors, often very complex and diverse, in terms of the interaction between variables. A successful reduction is one that explains the intricacies and complexities present in the set of behaviors as emergent effects from a much smaller number of variables and their interactions. A specific case is that of “atomistic reductions” which decompose a set of behaviors in terms of particles interacting with each other (e.g. ideal gas laws from statistical mechanics in physics). While many scientifically significant reductions are atomistic in nature, one should not think that every phenomenon can be successfully reduced atomistically (e.g. double-slit experiment). Even when a set of behaviors cannot be reduced atomistically we may be able to algorithmically reduce it. That is, to identify a set of processes, internal representations, and interactions that when combined give rise to the set of observed behaviors. This style of reduction is very useful in the field of phenomenology since it can provide insights into how complex phenomena (such as psychedelic hallucinations) emerge out of a few relatively simple algorithmic building blocks. This way we avoid begging the question by not assuming an atomistic ontology in a context where it is not clear what atoms correspond to.

Psychedelic Cryptography (QRI term; ref: 1, 2, 3): Encoding information in videos, text, abstract paintings, etc. such that only people who are in a specific state of consciousness can decode it. A simple example is the use of alterations in after-image formation on psychedelics (enhanced persistence of vision, a.k.a. tracers) to paint a picture by presenting the content of an image one column of pixels at a time. Sober individuals only see a column of pixels, while people high on psychedelics will see a long trace forming parts of an image that can be inferred by paying close attention. In general, psychedelic cryptography can be done by taking advantage of any of the algorithms one finds with algorithmic reductions of arbitrary states of consciousness. In the case of psychedelics, important effects that can be leveraged include tracers, pareidolia, drifting, and symmetrification.
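The column-at-a-time scheme can be sketched in a few lines. The tracer model below is a deliberately crude assumption (after-images as a fixed-length window of visible frames), just to show why the two viewers decode different things:

```python
# Sketch of the column-at-a-time encoding described above: present one pixel
# column per frame; only a viewer with long "tracers" (persistent after-images)
# integrates the columns back into the full picture.

def encode_columns(image):
    """Split a 2D image (list of rows) into a sequence of single-column frames."""
    height, width = len(image), len(image[0])
    return [[image[r][c] for r in range(height)] for c in range(width)]

def widest_view(frames, persistence):
    """Toy viewer model: after-images persist for `persistence` frames.
    Returns the widest slice of the image ever visible at once."""
    widest = 0
    for t in range(len(frames)):
        window = frames[max(0, t - persistence + 1): t + 1]
        widest = max(widest, len(window))
    return widest

frames = encode_columns([[1, 2, 3, 4], [5, 6, 7, 8]])
print(widest_view(frames, persistence=1))  # sober viewer: 1 column at a time
print(widest_view(frames, persistence=4))  # strong tracers: the whole image
```

The same template generalizes: any reliable algorithmic difference between states (pareidolia thresholds, drifting, symmetrification) can, in principle, serve as the decoding key.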

Psychedelic Turk (QRI term; ref: 1, 2, 3, 4): Mechanical Turk is a human task completion platform that matches people who need humans to do many small (relatively) easy tasks with humans willing to do a lot of small (relatively) easy tasks. Psychedelic Turk is akin to Mechanical Turk, but where workers disclose the state of consciousness they are in. This would be helpful for task requesters because many tasks are more appropriate for people in specific states of consciousness. For example, it is better to test ads intended to be seen by drunk people by having people who are actually drunk evaluate them, as opposed to asking sober people to imagine how they would perceive them while drunk. Likewise, some high-stakes tasks would benefit from being completed by people who are demonstrably very alert and clear-headed. And for foundational consciousness research, Psychedelic Turk would be extremely useful as it would allow researchers to test how people high on psychedelics and other exotic agents process information and experience emotions usually inaccessible in sober states.

Generalized Wada Test (QRI term; ref: 1, 2, 3): This is a generalization of the Wada Test in which, rather than pentobarbital being injected into just one hemisphere while the other is kept sober, one injects substance A into one hemisphere and substance B into the other. This could be used to improve our epistemology about various states of consciousness. By keeping one hemisphere in a state with robust linguistic ability, the other hemisphere could be used to explore alien state-spaces of consciousness while allowing for real-time verbal interpretation. The caveats and complications are myriad, but the general direction this conceptual handle points to is worth exploring.


Phenomenology

Self-Locating Uncertainty (originally a physics term but we also use it for describing a phenomenal character of experience; ref: 1, 2): The uncertainty that one has about who and where one is. This is relevant in light of states of consciousness that are common on high-dose psychedelics, mental illnesses, and meditation, where the information about one’s identity and one’s place in the world is temporarily inaccessible. Very high- and low-valence states tend to induce a high level of self-locating uncertainty as the information content of the experience is over-written by very simple patterns that dominate one’s attention. Learning to navigate states with self-locating uncertainty without freaking out is a prerequisite for studying alien state-spaces of consciousness.

Phenomenal Time (standard high-level philosophy term; ref: 1): The felt-sense of the passage of time. This is in contrast to the physical passage of time. Although physical time and phenomenal time tend to be intimately correlated, as you will see in the definition of “exotic phenomenal time” this is not always the case.

Phenomenal Space (standard high-level philosophy term; ref: 1, 2): The experience of space. Usually our sense of space represents a smooth 3D Euclidean space in a projective fashion (with variable scale encoding subjective distance). In altered states of consciousness phenomenal space can be distorted, expanded, contracted, higher-dimensional, topologically distinct, and even geometrically modified as in the case of hyperbolic geometry while on DMT (see below).

Pseudo-Time Arrow (QRI term; ref: 1): This is a formal model of phenomenal time. It utilizes a simple mathematical object: a graph. The nodes of the graph are identified with simple qualia values (such as colors, basic sounds, etc.) and the edges are identified with local binding connections. According to the pseudo-time arrow model, phenomenal time is isomorphic to the patterns of implicit causality in the graph, as derived from patterns of conditional statistical independence.
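Under the encoding above (nodes as simple qualia values, edges as local binding connections), a global time-ordering, when one exists, can be recovered from the implicit causal direction on the edges. The real model infers that direction from patterns of conditional statistical independence; in the sketch below we simply take the directed edges as given:

```python
# Minimal sketch of the pseudo-time arrow's graph representation: recover a
# phenomenal time-ordering by topologically sorting qualia nodes along their
# implicit causal edges (Kahn's algorithm).
from collections import defaultdict, deque

def phenomenal_order(nodes, causal_edges):
    """Return a global ordering of qualia nodes, or None if causality is cyclic."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for a, b in causal_edges:   # edge (a, b): a implicitly "causes" b
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    # If a cycle remains, no global time arrow exists (cf. time loops below).
    return order if len(order) == len(nodes) else None

print(phenomenal_order(["red", "tone", "ache"], [("red", "tone"), ("tone", "ache")]))
print(phenomenal_order(["a", "b"], [("a", "b"), ("b", "a")]))  # cyclic: None
```

The degenerate cases of this construction line up neatly with the exotic phenomenal time entries that follow: a cyclic graph has no global ordering (time loops), and a graph with no consistent edge directions at any scale has no arrow at all (timelessness).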

Exotic Phenomenal Time (QRI term; ref: 1): It is commonly acknowledged that in some situations time can feel like it is passing faster or slower than normal (cf. tachypsychia). What is less generally known is that experiences of time can be much more general, such as feeling like time stops entirely or that one is stuck in a loop. These are called exotic phenomenal time experiences, and while not very common, they certainly are informative about what phenomenal time is. Deviations from an apparent universal pattern are usually scientifically significant.

Reversed Time (QRI term; ref: 1): This is a variant of exotic phenomenal time in which experience seems to be moving backwards in time. “Inverted tracers” are experienced where one first experiences the faint after-images of objects before they fade in, constitute themselves, and then quickly disappear without a trace. According to the pseudo-time arrow model this experience can be described as an inversion of the implicit arrow of causality, though how this arises dynamically is still a mystery.

Moments of Eternity (common psychedelic phenomenology term; ref: 1): This exotic phenomenal time describes experiences where all apparent temporal movement seems to stop. One’s experience seems to have an unchanging quality and there is no way to tell if there will ever be something else other than the present experience in the whole of existence. In most cases this state is accompanied by intense emotions of simple texture and immediacy (rather than complex layered constructions of feelings). The experience seems to appear as the end-point and local maximum of annealing in psychedelic and dissociative states. That is, it often comes as metastable “flashes of large-scale synchrony” that are created over the course of seconds to minutes and decay just as quickly. Significantly, sensory deprivation conditions are ideal for the generation of this particular exotic phenomenal time.

Timelessness (QRI term; ref: 1): Timelessness is a variant of exotic phenomenal time where causality flows in a very chaotic way at all scales. This prevents forming a general global direction for time. In the state, change is perceptible and it is happening everywhere in your experience, and yet it seems as if there is no consensus among the different parts of your experience about the direction of time. That is, there is no general direction along which the experience seems to be changing as a whole over time. The chaotic bustle of changes that make up the texture of the experience are devoid of a story arc, and yet remain alive and turbulent. Trip reports suggest that the state that arises at the transition points between dissociative plateaus has this noisy timelessness quality (e.g. coming up on ketamine). Listening to green noise evokes the general idea, but you need to imagine that happening on every sensory modality and not just audio.

Time Loops (common psychedelic phenomenology term; ref: 1): This is perhaps the most common exotic phenomenal time experience that people have on psychedelics and dissociatives. This is due to the fact that, while it can be generated spontaneously, it is relatively easy to trigger by listening to repetitive music (e.g. a lot of EDM, trance, progressive rock, etc.), repetitive movements (e.g. walking, dancing), and repetitive thoughts (e.g. talking about the same topic for a long time) all of which are often abundant in the set and setting of psychedelic users. The effect happens when your projections about the future and the past are entirely informed by what seems like an endlessly repeating loop of experience. This often comes with intense emotions of its own (which are unusual and outside of the normal range of human experience), but it also triggers secondary emotions (which are just normal emotions amplified) such as fear and worry, or at times wonder and bliss. The pseudo-time arrow model of phenomenal time describes this experience as a graph in which the local patterns of implicit causality form a cycle at the global scale. Thus the phenomenal past and future merge at their tails and one inhabits an experiential world that seems to be infinitely-repeating.

Time Branching (QRI term; ref: 1, 2): A rare variant of exotic phenomenal time in which you feel like you are able to experience more than one outcome out of events that you witness. Your friend stands up to go to the bathroom. Midway there he wonders whether to go for a snack first, and you see “both possibilities play out at once in superposition”. In an extreme version of this experience type, each event seems to lead to dozens if not hundreds of possible outcomes at once, and your mind becomes like a choose-your-own-adventure book with a broccoli-like branching of narratives, and at the limit all things of all imaginable possible timelines seem to happen at once and you converge on a moment of eternity, thus transitioning out of this variety. We would like to note that a Qualia Computing article delved into the question of how to test if the effect actually allows you to see alternative branches of the multiverse. The author never considered this hypothesis plausible, but the relative ease of testing it made it an interesting, if wacky, research lead. The test consisted of trying to tell apart the difference between a classical and a quantum random number generator in real time. The results of the experiment are all null for the time being.

World-Sheet (QRI term; ref: 1, 2): We represent modal and amodal information in our experience in a projective way. In most common cases, this information forms a 2D “sheet” that encodes the distance to the objects around you, which can be used as a depth-map to navigate your surroundings. A lot of the information we experience is in the combination of this sheet and phenomenal time (i.e. how it changes over time).

Hyperbolic Phenomenal Space (QRI term; ref: 1, 2): The local curvature of the world-sheet encodes a lot of information about the scene. There is a sense in which the “energy” of the experience is related to the curvature of the world-sheet (in addition to its phenomenal richness and brightness). So when one raises the energy of the state dramatically (e.g. by taking DMT) the world-sheet tends to instantiate configurations with very high curvature. The surface becomes generically hyperbolic, which profoundly alters the overall geometry of one’s experience. A lot of the accounts of “space expansion” on psychedelics can be described in terms of alterations to the geometry of the world-sheet.

Dimensionality of Consciousness (QRI term; ref: 1, 2, 3): A generative definition for the dimensionality of a moment of experience can be “the highest virtual dimension implied by the network of correlations between globally bound degrees of freedom”. Admittedly, at the moment this is more of an intuition pump than a precise formalism, but a number of related phenomena suggest there is something in this general direction. For starters, differences between degrees of pain and pleasure are often described in terms of qualitative changes with phase transitions between them. Likewise, one generally experiences a higher degree of emotional involvement with a given stimulus the more sensory channels one is utilizing to interact with it. Pleasure that has cognitive, emotional, and physical components in a coordinated fashion is felt as much more profound and significant than pleasure that involves only one of those “channels”, or even pleasure that involves all three but lacks coherence between them. Another striking example involves the states of consciousness induced by DMT, in which there are phase transitions between the levels. These phase transitions seem to involve a change in the dimensional character of the hallucinations: in addition to hyperbolic geometry, DMT geometry involves a wide range of phenomena with virtual dimensions. On lower doses the hallucinations take the shape of 2D symmetrical plane coverings. On higher doses those coverings transform into 2.5D wobbly world-sheets, and on higher doses still into 3D symmetrical tessellations and rooms with 4D features. For example, the DMT level above 3D tessellations has its “walls” covered with symmetrical patterns that are correlated with one another in such a way that they generate a “virtual” 4th dimension, itself capable of containing semantic content.
We suspect that one of the reasons why MDMA is so uniquely good at healing trauma is that in order to address a high-dimensional pain you need a high-dimensional pleasure to hold space for it. MDMA seems to induce a high-dimensional variety of feelings of wellbeing, which can support and smooth high-dimensional pains such as those that underlie traumatic memories.
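One hedged way to put a number on the intuition above (our operationalization for this glossary, not a QRI formalism) is the participation ratio of the variance spectrum over globally bound channels: coherent channels concentrate variance into few components, while independent channels spread it out.

```python
# Participation ratio: the "effective number of dimensions" implied by how
# variance is spread across the principal components of a set of channels.

def participation_ratio(eigenvalues):
    """(sum of variances)^2 / (sum of squared variances)."""
    total = sum(eigenvalues)
    return total ** 2 / sum(v ** 2 for v in eigenvalues)

# Cognitive, emotional, and physical channels fully coherent: ~1 dimension.
print(round(participation_ratio([3.0, 0.0001, 0.0001]), 2))  # 1.0
# The same channels decoupled, each carrying its own variance: ~3 dimensions.
print(round(participation_ratio([1.0, 1.0, 1.0]), 2))        # 3.0
```

This does not capture the “virtual dimension” phenomenology of DMT states, but it does illustrate the basic claim: the same three channels can implement anything from a one-dimensional to a three-dimensional experience depending on their coherence.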


Qualia Futurology

Meme (standard science/psychology term coined by Richard Dawkins; 1): A “meme” is a cultural unit of information capable of being transmitted from one mind to another. Examples of memes include jokes, hat styles, window-dressing color palettes, and superstitions.

Memeplex (lesser known term coined by Richard Dawkins; 1, 2): A “memeplex” is a set of memes that, when simultaneously present, increase their ability to replicate (i.e. to be spread from one mind to another). Memeplexes do not need to say true things in order to be good at spreading; many strategies exist to motivate humans to share memes and memeplexes, ranging from producing good feelings (e.g. jokes), being threatening (e.g. apostasy), to being salient (e.g. famous people believe in them). A classic example of a memeplex is that of an ideology such as libertarianism, communism, capitalism, etc.

Full-Stack Memeplex (QRI term; ref: 1, 2): A “full-stack memeplex” is a memeplex that provides an answer to most common human questions. While the scope of a memeplex like “libertarianism” extends across a variety of fields including economics and ethics, it is not a full-stack memeplex because it does not attempt to answer questions such as “why does anything exist?”, “why are the constants of nature the way they are?” and “what happens after we die?”. Religions and some philosophies like existentialism, Buddhism, and the LessWrong Sequences are full-stack memeplexes. We also consider the QRI ecosystem to contain a full-stack memeplex.


Hedonistic Imperative (coined by David Pearce; ref: 1, 2): The Hedonistic Imperative is a book-length internet manifesto written by David Pearce which outlines how suffering will be eliminated with biotechnology and why our biological descendants are likely to be animated by gradients of information-sensitive bliss.

Abolitionism (coined by David Pearce; ref: 1): In the context of transhumanism, Abolitionism refers to the view in ethics that we should eliminate all forms of involuntary suffering both in human and non-human animals alike. The term was coined by David Pearce.

Fast Euphoria (QRI term; ref: 1): This is one of the main dimensions along which a drug can have effects, roughly described as “high-energy and high-valence” (with high-loading terms including: energetic, charming, stimulating, sociable, erotic, etc.).

Slow Euphoria (QRI term; ref: 1): This is one of the main dimensions along which a drug can have effects, roughly described as “low-energy and high-valence” (with high-loading terms including: calming, relieving, blissful, loving, etc.).

Spiritual/Philosophical Euphoria (QRI term; ref: 1, 2): This is one of the main dimensions along which a drug can have effects, roughly described as “high-significance and high-valence” (with high-loading terms including: incredible, spiritual, mystical, life-changing, interesting, colorful, etc.).

Wireheading (standard psychology, neuroscience, and philosophy term; 1, 2): The act of modifying a mind’s reward architecture and hedonic baseline so that it is always generating experiences with a net positive valence (whether or not they are mixed).

Wireheading Done Right (QRI term; ref: 1, 2): Wireheading done in such a way that one can remain rational, economically productive, and ethical. In particular, it entails (1) taking into account neurological negative feedback systems, (2) avoiding reinforcement cycles that narrow one’s behavioral focus, and (3) preventing becoming a pure replicator (see below). A simple proof of concept reward architecture for Wireheading Done Right is to cycle between different kinds of euphoria, each with immediate diminishing returns, and with the ability to make it easier to experience other kinds of euphoria. This would give rise to circadian cycles with stages involving fast, slow, and spiritual/philosophical euphoria at different times. Wireheading Done Right entails never getting stuck while always being in a positive state.
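The cycling architecture in the proof of concept can be simulated in a few lines. The decay and recovery constants below are illustrative assumptions; the point is only that greedy choice plus immediate diminishing returns plus cross-facilitation yields a stable cycle rather than a fixation:

```python
# Toy simulation of the "Wireheading Done Right" reward architecture: three
# euphoria types, each with immediate diminishing returns, where pursuing one
# kind makes the others easier to enjoy again.

EUPHORIAS = ["fast", "slow", "spiritual"]

def run_day(steps=24, decay=0.5, recovery=0.2):
    """Each hour, pursue whichever euphoria currently yields the most reward."""
    potency = {e: 1.0 for e in EUPHORIAS}      # current reward per pursuit
    total_valence = 0.0
    schedule = []
    for _ in range(steps):
        choice = max(potency, key=potency.get)  # never stuck: always a best option
        schedule.append(choice)
        total_valence += potency[choice]
        potency[choice] *= decay                # immediate diminishing returns
        for other in EUPHORIAS:                 # cross-facilitation: the others
            if other != choice:                 # recover while unused
                potency[other] = min(1.0, potency[other] + recovery)
    return schedule, total_valence

schedule, total = run_day()
print(schedule[:6])  # the greedy policy settles into a circadian-style cycle
```

No euphoria is ever pursued twice in a row, and reward never drops to zero: the agent keeps moving while always remaining in a positive state, which is exactly the property the definition asks for.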

Pure Replicator (QRI term; 1, 2): In the context of agents and minds, a Pure Replicator is an intelligence that is indifferent towards the valence of its conscious states and those of others. A Pure Replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Consciousness vs. Replicators (QRI term; 1, 2): This is a reframe of the big-picture narrative of the meaning of life in which the ultimate battle is between the act of reproducing for the sake of reproduction and the act of seeking the wellbeing of sentient beings for the sake of conscious value itself.

Maximum Effector (QRI term; 1): A Maximum Effector is an entity that uses all of its resources for the task of causing large effects, irrespective of what they may be. There is a sense in which most humans have a Maximum Effector side. Since causing large effects is not easy, one can reason that for evolutionary reasons people find such an ability to be a hard-to-fake signal of fitness. Arrogance and power may not be all that people find attractive, but they do play a role in what makes someone seem sexy to others. Hence, unfortunately, people research how to cause large effects even when they are harmful to everyone. The idealized version of a Maximum Effector, however, would be exclusively interested in causing large effects to happen rather than doing so as a way to meet an emotional need among others. Although being a Maximum Effector may seem crazy and pointless, they are important to consider in any analysis of the future because the long-tailed nature of large effects suggests that those who specifically seek to cause them are likely to have an impact on reality orders of magnitude higher than the impact of agents who try to simultaneously have both large and good effects.


Sasha Shulgin

Super-Shulgin Academy (coined by David Pearce; ref: 1, 2, 3, 4, 5, 6, 7, 8): This is a hypothetical future intellectual society that investigates consciousness empirically. Rather than merely theorizing about it or having people from the general population describe their odd experiences, the Super-Shulgin Academy directly studies the state-space of consciousness by putting the brightest minds on the task. The Super-Shulgin Academy (1) trains high-quality consciousness researchers and psychonauts, (2) investigates the computational trade-offs between different states of consciousness, (3) finds new socially-useful applications for exotic states of consciousness, (4) practices the art and craft of creating ultra-blissful experiences, and (5) develops and maintains a full-stack memeplex that incorporates the latest insights about the state-space of consciousness into the most up-to-date Theory of Everything.

Get-Out-Of-Hell-Free Necklace

An approach to doing good is to come up with a metric for what constitutes good or bad, and then to try to do things that will optimally increase or decrease that metric, as the case may be.

If you do this, you have to be careful about what metric you choose.

If you have an ontology where you measure good by “number of people who feel benefited by you”, you might end up doing things like sending everyone you can a doughnut with a signed note. If instead your metric is “number of people classified as poor” you might do best to focus on interventions that get people just over the hump of poverty as defined by your scale. And so on.

Conscientious and systematic altruists tend to see problems with metrics like those above. They realize that “people impressed” and “being poor according to an economic metric” are not metrics that really carve nature at its joints.

Dissatisfied with misleading metrics, one then tends to look closer at the world and arrive at metrics that take into account the length of different lives, their quality, their instrumental effect on the world, how much exactly people are benefited by the intervention relative to other cost-effective alternatives, and so on. And that’s how you get things like Quality-Adjusted Life Years (QALYs), micromorts, and the happiness index.

This is, I think, all moving in the right direction. Metrics that make an effort to carve nature at its joints can provide new lenses to see the world. And looking through those lenses tends to generate novel angles and approaches to do a lot of good.


earth-683436_960_720

This is why today I will suggest we consider a new metric: The Hell-Index.

A country’s Hell-Index could be defined as the yearly total of people-seconds in pain and suffering that are at or above 20 in the McGill Pain Index (or equivalent)*. This index captures the intuition that intense suffering can be in some ways qualitatively different and more serious than lesser suffering in a way that isn’t really captured by a linear pain scale.
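Given episode-level data, the index as defined is straightforward to compute; a minimal sketch (the episode figures below are made up for illustration):

```python
# Sketch of the Hell-Index: total person-seconds per year spent at or above
# 20 on the McGill Pain Index (or equivalent).

MCGILL_THRESHOLD = 20

def hell_index(episodes):
    """episodes: list of (mcgill_score, duration_seconds, people_affected)."""
    return sum(duration * people
               for score, duration, people in episodes
               if score >= MCGILL_THRESHOLD)

yearly_episodes = [
    (35, 3600, 120),    # cluster headache attacks
    (28, 7200, 40),     # kidney stones
    (12, 600, 10000),   # mild pain: below threshold, not counted
]
print(hell_index(yearly_episodes))  # 720000 person-seconds of hellish pain
```

Note how the threshold does the conceptual work: a huge number of mild-pain episodes contributes nothing, which is precisely the qualitative cutoff the linear pain scale fails to express.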

What does this metric suggest we should do to make the world better? Here is an idea (told as if narrated from the future):


Between 2030 and 2050 it was very common for people to wear Get-Out-Of-Hell-Free Necklaces. People had an incredible variety of custom-fit aesthetic and practical additives to their necklaces. But in every single one of them, you could rest assured, you would find a couple of doses of each of these agents:

  1. N,N-DMT (in case of Cluster Headaches)
  2. Quetiapine (in case of severe acute psychosis)
  3. Benzocaine + menthol (for very painful stings)
  4. Ketamine (for severe suicidal feelings)
  5. Microdosed Ibogaine + cocktail of partial mu-opioid agonists (for acute severe physical pain and panic attacks, e.g. kidney stones)

Some other people would get additional things like:

  1. Beta blocker (to take right after a traumatic event)
  2. Agmatine (to take in case you suspect you have recently been brainwashed), and
  3. Caffeine (if you absolutely need to operate heavy machinery and you are sleep-deprived)

In all cases, the antidote needed would be administered as soon as requested by the wearer. And the wearer would request the antidote as indicated by a very short test done with an app to determine the need for it.

But why? What’s this all about?

The Get-Out-Of-Hell-Free Necklace contents were chosen based on a cost-benefit analysis of how to reduce the world’s Hell-Index as much as possible. Cluster headaches, kidney stones, bad stings, severe psychotic episodes, suicidal depression, panic attacks, and many types of acute physical pain turned out to account for a surprisingly large percentage of each country’s Hell-Index. And in many of these cases, a substantial amount of the suffering was experienced before medical help could arrive on the scene and do anything about it. A lot of that intense suffering happened to be tightly concentrated in acute episodes rather than in chronic problems (save for some notable exceptions). And by incredible luck, it turned out that there were simple antidotes to most of these states of agony, all of them small enough to fit in a single light necklace. So it was determined that subsidizing Get-Out-Of-Hell-Free Necklaces was a no-brainer as a cost-effective altruistic intervention.


By 2050, safe and cheap genetic vaccines against almost all of these unpleasant states of consciousness had been discovered. This, in turn, made the Get-Out-Of-Hell-Free Necklaces unnecessary. But many who had benefited from one, who had been unlucky enough to need it, kept it on for many years. The piece was thought of as a symbol commemorating humanity’s progress in the destruction of hell. An achievement certainly worth celebrating.



* Admittedly, a more refined index would also distinguish between the intensities of different types of pain/suffering above 20 on the McGill Pain Index (or equivalent). Such an index would try to integrate a fair “total amount of hellish qualia” by adding up the pain of each state weighted by its most likely “true intensity” as determined by a model, doing so for each model available, and weighting each model’s contribution by its likelihood. E.g. do both a quadratic and an exponential conversion of values on the 0-to-10 visual analogue scale into dolors per second, and then take a likelihood-weighted average to combine those results into a final value.
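The footnote’s procedure can be sketched directly. The two conversion functions and the 50/50 model weights below are illustrative assumptions, not fitted values:

```python
# Likelihood-weighted model averaging for converting a 0-10 visual analogue
# scale (VAS) pain score into dolors per second.
import math

def quadratic_dolors(vas):    # model A: pain intensity grows quadratically
    return vas ** 2

def exponential_dolors(vas):  # model B: pain intensity grows exponentially
    return math.exp(vas) - 1

def expected_dolors_per_second(vas, p_quadratic=0.5, p_exponential=0.5):
    """Average dolors/second across the two models, weighted by likelihood."""
    return (p_quadratic * quadratic_dolors(vas)
            + p_exponential * exponential_dolors(vas))

# Under both models, a VAS-9 episode weighs vastly more than a VAS-3 one:
print(round(expected_dolors_per_second(3), 1))   # 14.0
print(round(expected_dolors_per_second(9), 1))   # 4091.5
```

The exponential model dominates at the top of the scale, which is exactly the regime the Hell-Index cares about: under model averaging, a single minute at VAS 9 outweighs hours at VAS 3.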

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and me on his AI Alignment podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between and arguments for and against functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr’s Levels of Analysis, we discuss the Symmetry Theory of Valence. We also get into identity and consciousness, the is-ought problem, and what this all means for AI alignment and building beautiful futures.

And then we end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ refers to how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism: the idea that phenomenology can be precisely represented mathematically. We borrow the goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists. This refers to how we believe emotional valence, or pain and pleasure, the goodness or badness of an experience, is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack though the skeptics position of your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is called Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for understanding an information processing system, in this particular case how you actually make sense of the world visually. The framework goes as follows: you have three ways in which you can describe an information processing system. First of all, the computational/behavioral level. What that is about is understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. Here an analogy would be with an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. What the algorithmic level of analysis is about is figuring out what are the internal representations, and the possible manipulations of those representations, such that you get the input-output mapping described by the first layer. Here you have an interesting relationship where understanding the first layer doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be: whenever you want to add a number, you just push a bead. Whenever you’re done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s may associate consciousness, if they give any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement learning over many iterations.

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and generally will be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has bearing on what it is like to be that system.
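The point that the computational level underdetermines the algorithmic level can be made concrete with a toy example. The two functions below are illustrative stand-ins, not anything from QRI: both implement the same input-output mapping (integer addition), yet run entirely different internal algorithms, much like the abacus versus a pocket calculator.

```python
# Toy illustration of Marr's levels: identical computational-level behavior
# (integer addition), two different algorithmic-level stories.

def add_builtin(a: int, b: int) -> int:
    """Delegates straight to the hardware adder."""
    return a + b

def add_by_counting(a: int, b: int) -> int:
    """Adds by repeated increment, like pushing abacus beads one at a time."""
    result = a
    for _ in range(b):
        result += 1
    return result

# The input-output mapping is the same everywhere we check, even though the
# run-time complexity (constant vs. linear in b) and internal representations
# differ. A behaviorist sees one system here; a functionalist sees two.
assert all(add_builtin(a, b) == add_by_counting(a, b)
           for a in range(20) for b in range(20))
print("same input-output mapping, different algorithms")
```

The physicalist move described above goes one level further down: even two systems running the same counting algorithm could differ at the implementation level, in silicon versus neurons versus beads.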

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition you can be, or whether if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which is trying to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There are like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others, and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be not very fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power, in the sense that the fact that we are conscious matters for evolution; evolution made us conscious for a reason, in that it’s actually doing some computational legwork that would maybe be possible to do otherwise, but just not as efficiently or as conveniently as it is with consciousness. Then also you have the property of qualia, the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. All of these are in a sense completely different worlds, but they have the property that they can be part of a unified experience, one that can experience color at the same time as experiencing sound. All those different types of sensations we describe under the category of consciousness because they can be experienced together.

And finally, you have unity, the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge it and take its unity seriously.

Lucas: What are your guys’s intuition pumps for thinking why consciousness exists as a thing? Why is there qualia at all?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that there are various contrasts that you can have within experience, which can serve a computational role. So, there may be a very deep reason why color qualia, or visual qualia, is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, people who are synesthetic.

They may open their eyes and they experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia that we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to represent geometrically a projective space. That’s something that naturally comes out of representing the world with the sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans that whenever they opened their eyes, they experience sound and they use that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. The intuition behind why we’re conscious is that all of these different contrasts in the structure of the relationships of possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: whenever you have a particular type of algorithmic information processing, you get these geometric state-spaces. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to how to tackle them is that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is, how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism and one of the examples that I brought up was, if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is objectively running. So this is a kind of an unanswerable question from the perspective of functionalism, whereas with the physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is, let’s say you’re at a park enjoying an ice cream. In this system that I created, which has, let’s say, algorithms isomorphic to whatever is going on in your brain, the particular algorithms that your brain is running in that precise moment map, within a functionalist paradigm, onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what is the boundary of the system? Even for some of these particular states that are allegedly very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That casts into question whether there is an objective, non-arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically: how is it possible that 100 billion neurons, just because they’re skull-bound and spatially distributed, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like: what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience, like experience seems unified, but experience also seems many ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things: we look at a person and we might have an intuition of what type of person they are, and if we’re not careful, we can confuse our intuition, we can confuse our feelings, with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There are definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention on two spots of your visual field and experience them as harmonized. That’s phenomenal character, and I would say that there’s a strong case to be made to not doubt that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gone into.” It also seems to violate Occam’s razor, or a principle of lightness where one’s metaphysics or ontology would want to assume the least amount of extra properties or entities in order to try to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, I mean there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe, that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And it is a viable position today to say that about consciousness: it’s not yet clear whether consciousness has deep structure, but we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is the most powerful here about what you guys are doing? Is it the specific theories and assumptions which you take are falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is, it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand it, how we behave?

Since the implementation level is kind of epiphenomenal from the point of view of the algorithm, how can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis does have bearing on another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window in the sense that it doesn’t really help you understand the algorithm.

But if how you’re using water to implement algorithms is by basically creating this system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things? What if epiphenomenalism is true, and consciousness is like smoke rising from computation without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe: that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of understanding consciousness, in contrast to functionalism. David Pearce, I think, would describe it as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, non-materialist means not saying that the stuff of the world is fundamentally unconscious. That’s something that materialism claims: that what the world is made of is not conscious, is raw matter, so to speak.

Andrés: Physicalist, again, in the sense that the laws of physics exhaustively describe behavior, and idealist in the sense that what makes up the world is qualia, or consciousness. The big picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101, and just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalists’ take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism: that consciousness is real enough to be describable mathematically in a precise sense. And actually, that would be my definition of realism: that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence and how it follows as a consequence of your valence realism?

Andrés: Sure, yeah. One of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. That question is usually answered at the agent level. So basically you think of agents as entities who spin out possibilities for what actions to take, have a way of sorting them by expected utility, and then carry them out. A lot of people may locate what we want or what we like or what we care about at that level, the agent level, whereas we think the true source of value is lower level than that; that there’s something else we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience. That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? Perhaps the best intuition pump is this extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view. But why would the sound of the Bay Area Rapid Transit, BART, which creates these very intense screeching sounds that are not even within the vocal range of humans (just really bizarre, never encountered in our evolutionary past), nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns; it’s not just goals and actions and utility functions, but the actual pattern of your experience that may determine valence. The same goes for the SubPac, a technology that basically renders sounds between 10 and 100 hertz: some of them feel really good, some of them feel pretty unnerving, some are anxiety-producing. Why would that be the case, especially when you’re getting types of input that have nothing to do with our evolutionary past?

It seems that there are ways of triggering high- and low-valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness like meditation or psychedelics, which seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss or pain. And again, they don’t seem to have much semantic content per se; or rather, the semantic content is not the core reason why they feel the way they do. It has more to do with a particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want. But all of these have counterexamples; all of these have some point you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, will sort of intrinsically encode, your emotional valence: how pleasant or unpleasant the experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they’re just good or bad. So with this valence realism view, this goodness or badness, whose nature is sort of self-intimatingly disclosed, has been in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life. Embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences, which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive: you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other, more specific, hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad; that just by existing in an experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists generally, would say there’s just another reinforcement learning algorithm somewhere that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by realizing that liking, wanting and learning are possible to dissociate; in particular, you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something that matters only because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive, but we don’t see that. We would also expect it to be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample: somebody may claim that what they truly want is to be academically successful, or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this: on the person’s graduation day, you give them an opioid antagonist. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, that they thought all along was tied in with the intentional content, with the fact of graduating, when in fact it was the hedonic gloss they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly, then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems, then in aggregate you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math and the interpretation. The first question is: what metaphysics do you even start with? What ontology do you even use to approach the problem? And we’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then, once you have your core ontology, in this case physics, there’s the question of what counts, what actively contributes to consciousness. Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses, but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head, it can directly contribute to my conscious experience. But even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine; there seems to be a boundary. One way of framing this is the boundary problem, and another way of framing it is the binding problem; these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness in itself through this lens, with a certain style of answer, a style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology-of-information problem.

Say we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness, and we can figure out some principled way to draw a boundary: this is conscious experience A and this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary. Then there’s this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. Again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the question: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self-reports as the gold standard. I think evolution is tricky sometimes and can lead to inaccurate self-report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense, but we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven when the reality is a five; maybe that’s a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, I can tell that it improves my subjective experience. I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah, maybe an analogy here could be pretty useful. There’s this researcher, William Sethares, who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. What that gives you is that if you take, for example, a major key, and you compute the average dissonance between pairs of notes within that major key, it’s going to be pretty low on average; and if you take the average dissonance of a minor key, it’s going to be higher. So in a sense, what distinguishes a minor and a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.

That’s a very ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition; how do you even go about it? But deep down, the reason why a particular passage sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very much as in music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.
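[Editor’s note: Sethares’ procedure can be sketched in a few lines. The code below is an illustrative reimplementation, not his published software; the constants parameterize the standard Plomp-Levelt roughness curve as used in his model, and the 1/k amplitude roll-off for the harmonics is a simplifying assumption.]

```python
import math

def pair_dissonance(f1, f2, a1, a2):
    # Plomp-Levelt style sensory dissonance between two pure partials,
    # using the parameterization from Sethares' model.
    b1, b2 = 3.5, 5.75
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    d = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * d) - math.exp(-b2 * s * d))

def dissonance(note_freqs, n_harmonics=6):
    # Total dissonance of a chord: add up the pairwise dissonance
    # between every harmonic of every note.
    partials = [(k * f0, 1.0 / k)            # (frequency, amplitude)
                for f0 in note_freqs
                for k in range(1, n_harmonics + 1)]
    return sum(pair_dissonance(fi, fj, ai, aj)
               for i, (fi, ai) in enumerate(partials)
               for (fj, aj) in partials[i + 1:])

# A major triad scores as far less dissonant than a cluster of semitones:
major = [440.0, 440.0 * 2 ** (4 / 12), 440.0 * 2 ** (7 / 12)]
cluster = [440.0, 440.0 * 2 ** (1 / 12), 440.0 * 2 ** (2 / 12)]
assert dissonance(cluster) > dissonance(major)
```

Running the same computation over all note pairs in a scale gives the kind of average-dissonance comparison between major and minor keys described above.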

These are electromagnetic waves, and it’s not exactly static, and it’s not exactly a standing wave either, but it gets really close to it. So basically what this is saying is that there’s this excitation-inhibition wave function, and it happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which a wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is a very simple one where the two hemispheres are alternately more excited versus inhibited. That will be a low-frequency harmonic because it is a very spatially large wave, an alternating pattern of excitation. Much higher-frequency harmonics are much more detailed and obviously hard to describe, but generally speaking, the spatial regions that are activated versus inhibited are very thin wave fronts.

It’s not a mechanical wave as such; it’s an electromagnetic wave. So it’s actually the electric potential in each of these regions of the brain that fluctuates. Within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
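[Editor’s note: the “weighted sum of harmonics” picture can be made concrete with a toy model. The cosine modes below are only a stand-in for connectome harmonics, which are eigenmodes of the connectome’s graph Laplacian rather than of a line segment; the sizes, weights, and choice of basis here are all illustrative.]

```python
import numpy as np

# Toy "harmonics": discrete cosine modes over a 1-D strip of cortex.
# Sampling at half-integer grid points makes the modes exactly orthogonal.
n_points, n_modes = 200, 8
x = (np.arange(n_points) + 0.5) * np.pi / n_points
harmonics = np.array([np.cos(k * x) for k in range(1, n_modes + 1)])

# A brain state at one instant: a weighted sum of the harmonics.
weights = np.array([0.9, 0.1, 0.4, 0.0, 0.2, 0.0, 0.05, 0.0])
state = weights @ harmonics          # shape (n_points,)

# Orthogonality lets us recover the weights from the raw state by
# projection: the decomposition step one would apply to imaging data.
recovered = (harmonics @ state) / (harmonics ** 2).sum(axis=1)
assert np.allclose(recovered, weights)
```

The projection step is why the framework behaves like a Fourier analysis: each harmonic’s weight can be read off independently from the raw signal.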

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is some fundamental valence feature of the world, and all brains that come out of evolution should probably enjoy resonance?

Mike: It’s less about the stimulus, less about the exact signal, and more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts, we’d say, or, in a precisely technical term, the consonance that counts, is the stuff that happens inside our brains. Empirically speaking, most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than, for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going into our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combine it with our Symmetry Theory of Valence. It’s a way of basically getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? Given the link to the Symmetry Theory of Valence, that should be a very good proxy for how pleasant it is to be that brain.
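[Editor’s note: as a cartoon of the “evaluate the data set for harmony” step, given the energy in each harmonic from such a decomposition, one could score the spectrum by energy-weighted pairwise dissonance. This is only an illustrative sketch, not QRI’s actual metric; it borrows a standard audio roughness curve (Plomp-Levelt, as parameterized by Sethares) purely for concreteness, and how consonance should really be computed over brain harmonics is an open research question.]

```python
import math

def roughness(f1, f2):
    # Plomp-Levelt style roughness for a pair of frequencies (unit
    # amplitudes), borrowed from audio psychoacoustics for illustration.
    b1, b2 = 3.5, 5.75
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    d = abs(f2 - f1)
    return math.exp(-b1 * s * d) - math.exp(-b2 * s * d)

def spectrum_dissonance(freqs, energies):
    # Energy-weighted pairwise dissonance of the active harmonics:
    # a toy proxy for how unpleasant it is "to be" that spectrum.
    return sum(energies[i] * energies[j] * roughness(freqs[i], freqs[j])
               for i in range(len(freqs))
               for j in range(i + 1, len(freqs)))

# Energy packed into two closely spaced harmonics scores as more
# dissonant than the same energy in two well-separated ones:
assert spectrum_dissonance([40.0, 45.0], [1.0, 1.0]) > \
       spectrum_dissonance([40.0, 120.0], [1.0, 1.0])
```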

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be the more fundamental of the two. There are probably many ways of generating states of consciousness that are, in a sense, completely unnatural, that are not based on the harmonics of the brain. But we suspect the bulk of the differences in states of consciousness will cash out as differences in brain harmonics, because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about the connectome-specific harmonic wave frame; the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump in here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on them. The first is what you call the real problem of consciousness, and the second is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of whether it is something that can be formalized or something intrinsically fuzzy. I’m calling this the real problem of consciousness, and a lot depends on the answer. There are so many different ways to approach consciousness, and hundreds, perhaps thousands, of different carvings of the problem: we have panpsychism, we have dualism, we have non-materialist physicalism, and so on. I think the core distinction is that all of these theories sort themselves into two buckets: is consciousness real enough to formalize exactly, or not? This frame is perhaps the most useful one for evaluating theories of consciousness.

Lucas: And then there’s the meta-problem of consciousness, which is quite funny. It’s basically: why have we been talking about consciousness for the past hour, and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in this field. The meta-problem of consciousness doesn’t necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move people make here is to say that all of these crazy things we think and say about consciousness are just any information-processing system modeling its own attentional dynamics. That’s one illusionist frame. But even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, you need to have consciousness and so on, while still lacking introspective access. You could have these complicated conscious information-processing systems that don’t necessarily self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens and, in particular, how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus. If the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting basically when you are at existential risk or when there are reproductive opportunities you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience added together, in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, and they assume everyone else is conscious, and they think that the only place for value to reside is within consciousness, and that a world without consciousness is actually a world without any meaning or value. Even if we think that philosophical zombies, people who are functionally identical to us but with no qualia or phenomenological or experiential states, are merely conceivable, it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, of valence, have some special ethical or experiential status, unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence: that we should be wary of building a Disneyland with no children, some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it, basically. I would say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say there aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as to your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re allowed to basically probe the quality of your experience. In many states you believe that the reason you like something is its intentional content. Again, it could be the case of graduating, or of getting a promotion, one of those things that a lot of people associate with feeling great. But if you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opioid antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, to many different points of view agreeing on what aspect of the experience brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought to be. I think we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and frameworks in ethics make much more, or less, sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality, about what exists, what’s real, and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think there is an implicit “should” wrapped up in questions about valence, but that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame to valence. Whether this also discloses, as David Pearce says, the utility function of the universe is another question, and can be decomposed.

Andrés: One framing here, too, is that we do suspect valence is going to be the thing that matters to any mind, if you probe it in the right way, in order to achieve reflective equilibrium. The best example is a talk a neuroscientist was giving at some point. There was something off, and everybody seemed to be a little bit anxious or irritated and nobody knew why, and then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There had been this subtle hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn’t realize that it was the source of the irritation. When it got fixed, everybody went, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. Somebody in the year 2050 might come in to one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me,” but if you put them through the scanner, we identify that their 17th and 19th harmonics are in a state of dissonance. We cancel the 17th to make it cleaner, and then the person will say, all of a sudden, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us, why do I prefer this, why do I think that is worse, will all of a sudden become crystal clear from the point of view of objectively measured valence gradients.

Mike: One of my favorite phrases in this context is “what you can measure you can manage,” and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door for, honestly, maybe a lot of amazing things, making the human condition just intrinsically better. Also maybe a lot of worrying things; being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems, and they’re getting pretty strong and are going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride, for now. So I’d like to discuss a little bit here the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all, especially, to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even if you have the best technical solution in the world, a sufficiently good technical solution doesn’t mean it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem: what are we even trying to do here? What is the optimal relationship between AI and humanity? And there are a couple of specific details here. First of all, I think nihilism is absolutely an existential threat, and if we can find some antidote to nihilism through some advanced valence technology, that could be enormously helpful for reducing x-risk.

Lucas: What kind of nihilism or are you talking about here, like nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus: the only real philosophical question is whether to commit suicide. Whereas the way I think of it, the real philosophical question is how to make love last, how to bring value to existence. And if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People say that global coordination on building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may sort of embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So, moving forward with AI alignment, as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness actually being able to distinguish between unconscious and conscious information-processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what’s up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb here for how to do that. One interesting thought is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but compilers which optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment from present day until superintelligence and beyond.

Mike: I think there's a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global and he had some great things to share about his model fragments paradigm. I think this is the right direction. It's sort of understanding that, yeah, human preferences are insane; they're just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There's this frame in AI safety we call the complexity of value thesis. I believe Eliezer came up with it in a post on LessWrong. It's this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn't think of as very valuable. Maybe if we leave everything else the same and take away just one dimension, say freedom, we lose most of the value. This paints a pretty sobering picture of how difficult AI alignment will be.

I think this is perhaps arguably the source of a lot of worry in the community: not only do we need to make machines that won't just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory. Possibly, if we move at all, then we may enter a totally different trajectory, one that we in 2019 wouldn't think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing that I'm playing around with here is: instead of the complexity of value thesis, the unity of value thesis. It could be that many of the things that we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all of these have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there's some sort of elegant compression that can be made, and actually things aren't so stark. We're not this point in a super high-dimensional space where, if we leave the point, then everything of value is trashed forever. Maybe there's some sort of convergent process that we can follow, that we can essentialize. We can make this list of 100 things that humanity values, and maybe they all have in common positive valence, and positive valence can sort of be reverse-engineered. To some people this feels like a very scary dystopic scenario (don't knock it until you've tried it), but at the same time there's a lot of complexity here.
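The contrast Mike draws here can be sketched as a toy model. Everything below is an illustrative assumption, not anything from the discussion: under the complexity-of-value picture, value is modeled as a narrow bump around one point in a thousand-dimensional space, so a small random perturbation destroys almost all of it; under a unity-of-value picture, value depends only on a single projected quantity (a stand-in for "valence") and is far more robust to the same perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 1000  # dimensionality of the hypothetical "value space"

target = rng.normal(size=DIM)  # the point representing human values

def fragile_value(x, width=0.1):
    # Complexity-of-value picture: value is a narrow Gaussian bump
    # around one point; it decays with distance in *every* direction.
    return np.exp(-np.sum((x - target) ** 2) / (2 * width ** 2))

def unified_value(x):
    # Unity-of-value picture: value depends only on a single projection
    # (a stand-in for "valence"), so most directions are neutral.
    valence_axis = np.ones(DIM) / np.sqrt(DIM)
    return float(valence_axis @ x)

# A small random perturbation wipes out almost all "fragile" value...
nearby = target + rng.normal(scale=0.05, size=DIM)
print(fragile_value(target), fragile_value(nearby))
# ...but barely shifts the "unified" value.
print(unified_value(target), unified_value(nearby))
```

In a thousand dimensions, even tiny per-coordinate noise adds up to a large total distance, which is why the narrow-bump value collapses to essentially zero while the one-dimensional projection barely moves. That is the intuition behind "any distance in any direction loses the value" versus "maybe there's an elegant compression."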

One core frame that the idea of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different than the complexity of value thesis puts forward. Maybe the actual goal is different and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all preferences and values that human beings have, and then the valence realist view, which says that what's ultimately good are certain experiential or hedonic states. I'm interested and curious about, if this valence view is true, whether it's all just going to turn into hedonium in the end.

Mike: I'm personally a fan of continuity. I think that if we do things right we'll have plenty of time to get things right, and also if we do things wrong then we'll have plenty of time for things to be wrong. So I'm personally not a fan of big unilateral moves. It's just getting back to this question of whether understanding what is can help us: clearly yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it's a seductive frame, I would say. If whole brain emulations wouldn't necessarily have the same qualia as the original humans, based on hardware considerations, there could be some weird lock-in effects: if the majority of society turned themselves into p-zombies, then it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It's a pretty good taxonomy. Basically there's open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we're all one consciousness. Another frame is that our true identity is the light of consciousness, so to speak, so it doesn't matter in what form it manifests; it's always the same fundamental ground of being. Then you have the common-sense view, called closed individualism: you start existing when you're born, you stop existing when you die; you're just this segment. Some religions actually extend that into the future or past with reincarnation or maybe with heaven.

It's the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have the view called empty individualism, which is that you're just a moment of experience. That's fairly common among physicists and a lot of people who've tried to formalize consciousness; often they converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, all of those seem to implicitly be based on closed individualism, and they're not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn't actually carve nature at its joints, as a Buddhist might say, and the feeling of continuity of being a separate unique entity is an illusory construction of your phenomenology, that casts in a completely different light how to approach rationality itself and even self-interest, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into these very tricky situations like, well, what if there is mind melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common-sense view of identity, but they're not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there's probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we're actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it's evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to believe of yourself as who you can affect the most directly in a causal way, if you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have. If you think of a society of open individualists, everybody altruistically maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that one view would have a tremendous evolutionary advantage in that context. So I'm not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it, how to make it evolutionarily stable and also how to make it ethical. It's an open question. I do think it's important to think about, and if you take consciousness very seriously, especially within physicalism, that usually will cast huge doubts on the common-sense view of identity.

It doesn’t seem like a very plausible view if you actually tried to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we're not by default all empty individualists or open individualists. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self, because they're not the same as you? So that leads to a problem of defection. And open individualism, where everything is the same being, so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn't allow altruist punishment or any way to stop the free riding. There's interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one's theory of identity. People are opening themselves up to getting hacked in different ways, and so different theories of identity allow different forms of hacking.
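The free-rider dynamic described above can be sketched as a toy public-goods game. The payoff numbers, endowment, and multiplier are hypothetical, chosen only to illustrate the structure: open individualists contribute everything to a shared pool that gets multiplied and split equally, while a single closed individualist contributes nothing but still draws a share.

```python
def payoffs(n_open, n_closed=1, endowment=10.0, multiplier=2.0):
    """Toy public-goods game: contributors pool their endowments,
    the pool is multiplied and split equally among everyone."""
    n = n_open + n_closed
    pool = n_open * endowment * multiplier
    share = pool / n                    # everyone, contributor or not, gets a share
    open_payoff = share                 # open individualists gave up their endowment
    closed_payoff = share + endowment   # the defector keeps their endowment too
    return open_payoff, closed_payoff

open_p, closed_p = payoffs(n_open=99)
print(open_p, closed_p)  # 19.8 29.8: the lone defector comes out ahead
```

This matches the intuition in the discussion: a society of pure open individualists is invadable by a single closed individualist defector, which is why the speakers stress that the game theory, such as altruist punishment mechanisms, still has to be worked out.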

Andrés: Yeah, and sometimes that's really good and sometimes really bad. I would make the prediction that, not necessarily open individualism in its full-fledged form, but a weaker sense of identity than closed individualism, is likely going to be highly adaptive in the future as people basically gain the ability to modify their state of consciousness in much more radical ways. People who just identify with a narrow sense of identity will just stay in their shells, not trying to disturb the local attractor too much. That itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the people making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity, I think, is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, with identity I'd like to move beyond all distinctions of sameness or difference. To say, oh, we're all one consciousness, to me seems like saying we're all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world, a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

In the same way, it would be nonsense to say, "Oh, I am these specific atoms, I am just the forces of nature that are bounded within my skin and body." In the same sense as what we were discussing with consciousness, there is the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional use, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of the valence theory, if qualia are actually good or bad, then, as David Pearce says, it's really just an epistemological problem that you don't have access to other brain states in order to see the self-intimating nature of what it's like to be that thing in that moment.

There's a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it's good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn't matter where the bliss is. Bliss is bliss, and there's no such thing as your bliss or anyone else's bliss. Bliss is its own independent feature or property, and you don't really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that's playing out.

The universe is just peopling all of us at the same time, and when you get this view and see yourself as just a super thin slice of the evolution of consciousness and life, for me it's like, why do I really need to propagate my information into the future? I really don't think there's anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see that as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives and concepts and adaptations with a fundamental-physics-level description of what is. I don't have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that's not necessarily super easy if you're suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There's an article I wrote; I just called it Consciousness vs. Replicators. That kind of gets to the heart of this issue. It sounds a little bit like good and evil, but it really isn't. The true enemy here is replication for replication's sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and benefit of consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark's general frame of us living in this mathematical universe. One reframe of what we were just talking about, in these terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and optimize our light cone for those sorts of patterns. This may have some counterintuitive implications; maybe closed individualism is actually a very adaptive thing that, in the long term, builds robust societies. It could be that that's not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. It's this idea that we tend to think of consciousness on the human scale. Is the human condition good or bad? Is the balance of human experience on the good end, the heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated but is falsifiable, and this gets into Bostrom's simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, then first of all, the human stuff might just be a rounding error.

Most of the valence, positive and negative, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia as well: where could massive amounts of consciousness be hiding in the cosmological sense? I'm very suspicious that the big bang starts with a very symmetrical state; I'll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not human. Cosmological-scale events or objects would be very interesting to point it at. This would give a much clearer answer than human intuition as to whether we live somewhere closer to heaven or hell.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, I think these were wonderful questions, and it's very rare for an interviewer to have unconventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org, and we're working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We're building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés' at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey