That Time Daniel Dennett Took 200 Micrograms of LSD (In Another Timeline)

[Epistemic status: fiction]

Andrew Zuckerman messaged me:

Daniel Dennett admits that he has never used psychedelics! What percentage of functionalists are psychedelic-naïve? What percentage of qualia formalists are psychedelic-naïve? In this 2019 quote, he talks about his drug experience and also alludes to meme hazards (though he may not use that term!):

Yes, you put it well. It’s risky to subject your brain and body to unusual substances and stimuli, but any new challenge may prove very enlightening–and possibly therapeutic. There is only a difference in degree between being bumped from depression by a gorgeous summer day and being cured of depression by ingesting a drug of one sort or another. I expect we’ll learn a great deal in the near future about the modulating power of psychedelics. I also expect that we’ll have some scientific martyrs along the way–people who bravely but rashly do things to themselves that disable their minds in very unfortunate ways. I know of a few such cases, and these have made me quite cautious about self-experimentation, since I’m quite content with the mind I have–though I wish I were a better mathematician. Aside from alcohol, caffeine, nicotine and cannabis (which has little effect on me, so I don’t bother with it), I have avoided the mind-changing options. No LSD, no psilocybin or mescaline, though I’ve often been offered them, and none of the “hard” drugs.


As a philosopher, I have always accepted the possibility that the Athenians were right: Socrates was quite capable of corrupting the minds of those with whom he had dialogue. I don’t think he did any clear lasting harm, but it is certainly possible for a philosopher to seriously confuse an interlocutor or reader—to the point of mental illness or suicide, or other destructive behavior. Ideas can be just as dangerous as drugs.


Dennett Explained by Brendan Fleig-Goldstein and Daniel A. Friedman (2019)

It would be quite fascinating to know what Dan would say about lived psychedelic states. With that in mind, here is an essay prompt originally conceived for GPT-3 to satisfy our curiosity:

And after seeing some surprising empirical results with his heterophenomenological methods when examining the experience of people on psychedelics, Daniel Dennett decided to experience it for himself by taking 200 micrograms of LSD. The first thing he said to himself as he felt the first indications of the come-up was…


Maggie and Anders

Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators and content contributors – see letters I & II, letters III, IV, V, & VI, and letters VII, VIII, & IX) decided to give it a go first. We shall then compare it with what GPT-3 can come up with at a later point.

In a way, what you are about to read can be thought of as Anders & Maggie’s simulation of a hyper-intelligent GPT-3-like system’s simulation of a hypothetical Daniel Dennett on LSD. So many layers, I know. But the results speak for themselves:

Dan says: Correlation is all that is needed. So what states a mind uses for modeling conditions of importance to it is fundamentally arbitrary. Like with language. Words represent things to humans, but there are many languages, and words do not resemble what they represent. They only correlate with what they represent, and the correlation needs to be learned. To do science about the mind, one does not need to attach any special weight to any conscious state of mind over any other. One person’s hope may be another person’s despair. These “qualia” are like internally projected hypo-languages which are different all over the map, if there were one. In fact there cannot be an interpersonal map of what things feel like personally. Only naïve people could hope for a fundamental linguistics of consciousness, as if these states were to constitute a universal language of some ground truth of the mind. Silly. Romantic. It’s all arbitrary. For the record, I never said subjective experiential states do not exist. People misrepresent me on that. What I am saying is that it will never be possible to scientifically map how the state that a word such as “green” translates to actually feels in the brain of a given individual. My green could be your red.

Just drop the whole idea of trying to map the state-space of qualia. That is my position. Or at least I know it is, logically. Right now I begin to notice how everything intensifies and becomes somehow more salient. More revealingly “real”. As I reflect on the notion of how “states” correlate, a humorous episode from my undergraduate student life so long ago is brought to the surface. At Wesleyan it was, where I was taking a course in Art Appreciation. The lecturer was showing a slide of a still life. A bowl of fruit it was, conspicuously over-ripe. Pointing at one of the fruits, he asked, “Can anyone tell me what state this peach is in?” There was silence for about three seconds, then one student exclaimed: “Georgia”. Everyone laughed joyfully. Except me. I never quite liked puns. Too plebeian. Sense of humor is arbitrary. I believe that episode helped convince me that the mind is not mysterious after all. It is just a form of evolved spaghetti code finding arbitrary solutions to common problems. Much like adaptations of biochemistry in various species of life. The basic building blocks remain fixed, as an operative system if you will, but what is constructed with it is arbitrary and only shaped by fitness proxies. Which are, again, nothing but correlations. I realized then that I’d be able to explain consciousness within a materialist paradigm without any mention of spirituality or new realms of physics. All talk of such is nonsense.

I have to say, however, that a remarkable transformation inside my mind is taking place as a result of this drug. I notice the way I now find puns quite funny. Fascinating. I also reflect on the fact that I find it fascinating that I find puns funny. It’s as if… I hesitate to think it even to myself, but there seems to be some extraordinarily strong illusion that “funny” and “fascinating” are in fact those very qualia states which… which cannot possibly be arbitrary. Although the reality of it has got to be that when I feel funniness or fascination, those are brain activity patterns unique to myself, not possible for me to relate to any other creature in the universe experiencing them the same way, or at least not to any non-human species. Not a single one would feel the same, I’m sure. Consider a raven, for example. It’s a bird that behaves in socially intricate ways, makes plans for the next day, can grasp how tools are used, and excels at many other mental tasks, sometimes even surpassing a chimpanzee. Yet ravens and humans last shared a common ancestor more than three hundred million years ago. The separate genetic happenstances of evolution since then, coupled with the miniaturization pressure due to weight limitations on a flying creature, mean that if I were to dissect and anatomically compare the brain of a raven and a human, I’d be at a total loss. Does the bird even have a cerebral cortex?

An out of character thing is happening to me. I begin to feel as if it were in fact likely that a raven does sense conscious states of “funny” and “fascinating”. I still have functioning logic that tells me it must be impossible. Certainly, it’s an intelligent creature. A raven is conscious, probably. Maybe the drug makes me exaggerate even that, but it ought to have a high likelihood of being the case. But the states of experience in a raven’s mind must be totally alien if it were possible to compare them side by side with those of a human, which of course it is not. The bird might as well come from another planet.

The psychedelic drug is having an emotional effect on me. It does not twist my logic, though. This makes for internal conflict. Oppositional suggestions spontaneously present themselves. Could there be at least some qualia properties which are universal? Or is every aspect arbitrary? If the states of the subjective are not epiphenomenal, there would be evolutionary selection pressures shaping them. Logically there should be differences in computational efficiency when the information encoded in qualia feeds back into actions carried out by the body that the mind controls. Or is it epiphenomenal after all? Well, there’s the hard problem. No use pondering that. It’s a drug effect. It’ll wear off. Funny thing though, I feel very, very happy. I’m wondering about valence. It now appeals strongly to take the cognitive leap that at least the positive/negative “axis” of experience may in fact be universal. A modifier of all conscious states, a kind of transform function. Even alien states could then have a “good or bad” quality to them. Not directly related to the cognitive power of intelligences, but used as an efficient guidance for agency by them all, from the humblest mite to the wisest philosopher. Nah. Romanticizing. Anthropomorphizing.

Further into this “trip” now. Enjoying the ride. It’s not going to change my psyche permanently, so why not relax and let go? What if conscious mind states really do have different computational efficiency for various purposes? That would mean there is “ground truth” to be found about consciousness. But how does nature enable the process for “hitting” the efficient states? If that has been convergently perfected by evolution, conscious experience may be more universal than I used to take for granted. Without there being anything supernatural about it. Suppose the possibility space of all conscious states is very large, so that within it there is an ideally suited state for any mental task. No divine providence or intelligent design, just a law of large numbers.

The problem then is only a search algorithmic one, really. Suppose “fright” is a state ideally suited for avoiding danger. At least now, under the influence, fright strikes me as rather better for the purpose than attraction. Come to think of it, Toxoplasma gondii has the ability to replace fright with attraction in mice with respect to cats. It works the same way in other mammals, too. Are things then not so arbitrarily organized in brains? Well, those are such basic states we’d share them with rodents presumably. Still can’t tell if fright feels like fear in a raven or octopus. But can it feel like attraction? Hmmm, these are just mind wanderings I go through while I wait for this drug to wear off. What’s the harm in it?

Suppose there is a most computationally efficient conscious state for a given mental task. I’d call that state the ground state of conscious intelligence with respect to that task. I’m thinking of it like mental physical chemistry. In that framework, a psychedelic drug would bring a mind to excited states. Those are states the mind has not practiced using for tasks it has learned to do before. The excited states can then be perceived as useless, for they perform worse at tasks one has previously become competent at while sober. Psychedelic states are excited with respect to previous mental tasks, but they would potentially be ground states for new tasks! It’s probably not initially evident exactly what those tasks are, but the great potential to in fact become more mentally able would be apparent to those who use psychedelics. Right now this stands out to me as absolutely crisp, clear and evident. And the sheer realness of the realization is earth-shaking. Too bad my career could not be improved by any new mental abilities.

Oh Spaghetti Monster, I’m really high now. I feel like the sober me is just so dull. Illusion, of course, but a wonderful one I’ll have to admit. My mind is taking off from the heavy drudgery of Earth and reaching into the heavens on the wings of Odin’s ravens, eternally open to new insights about life, the universe and everything. Seeking forever the question to the answer. I myself am the answer. Forty-two. I was born in nineteen forty-two. The darkest year in human history. The year when Adolf Hitler looked unstoppable in his drive to destroy all human value in the entire world. Then I came into existence, and things started to improve.

It just struck me that a bird is a good example of embodied intelligence. Sensory input to the brain can produce lasting changes in the neural connectivity and so on, resulting in a saved mental map of that which precipitated the sensory input. Now, a bird has the advantage of flight. It can view things from the ground and from successively higher altitudes and remember the appearance of things on all these different scales. Plus it can move sideways large distances and find systematic changes over scales of horizontal navigation. Entire continents can be included in a bird’s area of potential interest. Continents and seasons. I’m curious if engineers will someday be able to copy the ability of birds into a flying robot. Maximizing computational efficiency. Human-level artificial intelligence I’m quite doubtful of, but maybe bird brains are within reach, though quite a challenge, too.

This GPT-3 system by OpenAI is pretty good for throwing up somewhat plausible suggestions for what someone might say in certain situations. Impressive for a purely lexical information processing system. It can be trained on pretty much any language. I wonder if it could become useful for formalizing those qualia ground states? The system itself is not an intelligence in the agency sense, but it is a good predictor of states. Suppose it can model the way the mind of the bird cycles through all those mental maps the bird brain has in memory. Where the zooming in and out on different scales brings out different visual patterns. If aspects of patterns from one zoom level are combined with aspects from another zoom level, the result can be a smart conclusion about where and when to set off in what direction and with what aim. Then there can be combinations also with horizontally displaced maps and time-displaced maps. Essentially, to a computer scientist, we are talking about massively parallel processing through cycles of information compression and expansion, with successive approximation combinations of pattern pieces from the various levels in rapid repetition, until something leads to an action which becomes rewarded via a utility function maximization.


Axioms of Integrated Information Theory (IIT)

Thank goodness I’m keeping all this drugged handwaving to myself and not sharing it in the form of any trip report. I have a reputation for being down to Earth, and I wouldn’t want to spoil it. Flying with ravens, dear me. Privately it is quite fun right now, though. That cycling of mental maps, could it be compatible with the Integrated Information Theory? I don’t think Tononi’s people have gone into how an intelligent system would search qualia state-space and how it would find the task-specific ground states via successive approximations. Rapidly iterated cycling would bring in a dynamic aspect they haven’t gotten to, perhaps. I realize I haven’t read the latest from them. Was always a bit skeptical of the unwieldy mathematics they use. Back of the envelope here… if you replace the clunky “integration” with resonance, maybe there’s a continuum of amplitudes of consciousness intensity? Possibly with a threshold corresponding to IIT’s nonconscious feed-forward causation chains. The only thing straight from physics which would allow this, as far as I can tell from the basic mathematics of it, would be wave interference dynamics. If so, what property might valence correspond to? Indeed, be mappable to? For conscious minds, experiential valence is the closest one gets to updating on a utility function. Waves can interfere constructively and destructively. That gives us frequency-variable amplitude combinations, likely isomorphic with the experienced phenomenology and intensity of conscious states. Such as the enormous “realness” and “fantastic truth” I am now immersed in. Not sure if it’s even “I”. There is ego dissolution. It’s more like a free-floating cosmic revelation. Spectacular must be the mental task for which this state is the ground state!
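The fictional Dennett’s back-of-the-envelope appeal to “frequency-variable amplitude combinations” rests on a genuinely elementary fact about superposed waves: in-phase components reinforce, out-of-phase components cancel. A minimal sketch of just that fact (the 440 Hz tone and 44.1 kHz sample rate are arbitrary illustrative choices, not anything from the narrative):

```python
import math

RATE = 44100   # samples per second (arbitrary choice)
FREQ = 440.0   # tone frequency in Hz (arbitrary choice)

def superpose(phase_offset, n):
    """Sum of two unit-amplitude sine waves, the second shifted in phase."""
    t = n / RATE
    return (math.sin(2 * math.pi * FREQ * t)
            + math.sin(2 * math.pi * FREQ * t + phase_offset))

# In phase: the waves reinforce each other (constructive interference).
peak_constructive = max(abs(superpose(0.0, n)) for n in range(RATE))

# Half a cycle out of phase: the waves cancel (destructive interference).
peak_destructive = max(abs(superpose(math.pi, n)) for n in range(RATE))

print(round(peak_constructive, 2))  # ≈ 2.0: amplitudes add
print(round(peak_destructive, 2))   # ≈ 0.0: amplitudes cancel
```

Once the two frequencies differ, the peak amplitude varies over time, which is all the “frequency-variable amplitude combinations” in the monologue amount to at the level of raw signals.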

Wave pattern variability is clearly not a bottleneck. Plotting graphs of frequencies and amplitudes for even simple interference patterns shows there’s a near-infinite space of distinct potential patterns to pick from. The operative system, that is, the evolution and development of nervous systems, must have been slow to optimize via genetic selection early on in the history of life, but then it could go faster and faster. Let me see, humans acquired a huge redundancy of neocortex of the same type as animals use for navigation in spacetime locations. Hmmm…, that which the birds are so good at. Wonder if the same functionality in ravens also got increased in volume beyond what is needed for navigation? Opening up the possibility of using the brain to also “navigate” in social relational space or tool function space. Literally, these are “spaces” in the brain’s mental models.

Natural selection of genetics cannot have found the ground states for all the multiple tasks a human with our general intelligence is able to take on. Extra brain tissue is one thing it could produce, but the way that tissue gets efficiently used must be trained during life. Since the computational efficiency of the human brain is assessed to be near the theoretical maximum for the raw processing power it has available, inefficient information-encoding states really aren’t very likely to make up any major portion of our mental activity. Now, that’s a really strong constraint on mechanisms of consciousness there. If you don’t believe it was all magically designed by God, you’d have to find a plausible parsimonious mechanism for how the optimization takes place.

If valence is in the system as a basic property, then what can it be if it’s not amplitude? For things to work optimally, valence should in fact be orthogonal to amplitude. Let me see… What has a natural tendency to persist in evolving systems of wave interference? Playing around with some programs on my computer now… well, appears it’s consonance which continues and dissonance which dissipates. And noise which neutralizes. Hey, that’s even simple to remember: consonance continues, dissonance dissipates, noise neutralizes. Goodness, I feel like a hippie. Beads and Roman sandals won’t be seen. In Muskogee, Oklahoma, USA. Soon I’ll become convinced love’s got some cosmic ground state function, and that the multiverse is mind-like. Maybe it’s all in the vibes, actually. Spaghetti Monster, how silly that sounds. And at the same time, how true!
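The mnemonic “consonance continues, dissonance dissipates” is a dynamical claim the monologue never spells out, but one loosely related property is easy to check numerically: a consonant frequency ratio (3:2, a perfect fifth) yields a steadier amplitude envelope than a dissonant one (16:15, a minor second), whose slow beating makes the envelope swing. A toy sketch of that envelope difference, with arbitrary tone frequencies, sample rate, and window size, and no pretense of modeling the essay’s “dissipation” mechanism itself:

```python
import math

RATE = 8000    # samples per second (arbitrary choice)
WINDOW = 400   # 50 ms analysis windows (arbitrary choice)

def envelope_variability(f1, f2):
    """Standard deviation of short-window RMS for the sum of two sine tones.
    A low value means a steady envelope; a high value means heavy beating."""
    samples = [math.sin(2 * math.pi * f1 * n / RATE)
               + math.sin(2 * math.pi * f2 * n / RATE)
               for n in range(RATE)]  # one second of signal
    rms = []
    for start in range(0, len(samples), WINDOW):
        chunk = samples[start:start + WINDOW]
        rms.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    mean = sum(rms) / len(rms)
    return math.sqrt(sum((r - mean) ** 2 for r in rms) / len(rms))

fifth = envelope_variability(200, 300)            # 3:2 ratio, consonant
second = envelope_variability(200, 200 * 16 / 15)  # 16:15 ratio, dissonant

print(fifth < second)  # True: the consonant pair keeps the steadier envelope
```

The consonant pair’s envelope variability is essentially zero because every window contains whole numbers of both periods, while the dissonant pair’s ~13 Hz beat makes successive windows catch different phases of the beat cycle.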


Artist: Matthew Smith

I’m now considering the brain to produce self-organizing ground state qualia selection via activity wave interference with dissonance gradient descent and consonance gradient ascent with ongoing information compression-expansion cycling and normalization via buildup of system fatigue. Wonder if it’s just me tripping, or if someone else might seriously be thinking along these lines. If so, what could make a catchy name for their model?

Maybe “Resonant State Selection Theory”? I only wish this could be true, for then it would be possible to unify empty individualism with open individualism in a framework of full empathic transparency. The major ground states for human intelligence could presumably be mapped pretty well with an impressive statistical analyzer like GPT-3. Mapping the universal ground truth of conscious intelligence, what a vision!

But, alas, the acid is beginning to wear off. Back to the good old opaque arbitrariness I’ve built my career on. No turning back now. I think it’s time for a cup of tea, and maybe a cracker to go with that.



These are the answers to the most Frequently Asked Questions about the Qualia Research Institute. (See also: the glossary).

(Organizational) Questions About the Qualia Research Institute

  • What type of organization is QRI?

    • QRI is a nonprofit research group studying consciousness based in San Francisco, California. We are a registered 501(c)(3) organization.

  • What is the relationship between QRI, Qualia Computing, and Opentheory?

    • Qualia Computing and Opentheory are the personal blogs of QRI co-founders Andrés Gómez Emilsson and Michael Johnson, respectively. While QRI was in its early stages, all original QRI research was initially published on these two platforms. However, from August 2020 onward, this is shifting to a unified pipeline centered on QRI’s website.

  • Is QRI affiliated with an academic institution or university?

    • Although QRI does collaborate regularly with university researchers and laboratories, we are an independent research organization. Put simply, QRI is independent because we didn’t believe we could build the organization we wanted and needed to build within the very real constraints of academia. These constraints include institutional pressure to work on conventional projects, to optimize for publication metrics, and to clear various byzantine bureaucratic hurdles. It also includes professional and social pressure to maintain continuity with old research paradigms, to do research within an academic silo, and to pretend to be personally ignorant of altered states of consciousness. It’s not that good research cannot happen under these conditions, but we believe good consciousness research happens despite the conditions in academia, not because of them, and the best use of resources is to build something better outside of them.

  • How does QRI align with the values of EA?

    • Effective Altruism (EA) is a movement that uses evidence and reason to figure out how to do the most good. QRI believes this aesthetic is necessary and important for creating a good future. We also believe that if we want to do the most good, foundational research on the nature of the good is of critical importance. Two frames we offer are Qualia Formalism and Sentientism. Qualia Formalism is the claim that experience has a precise mathematical description, that a formal account of experience should be the goal of consciousness research. Sentientism is the claim that value and disvalue are entirely expressed in the nature and quality of conscious experiences. We believe EA is enriched by both Qualia Formalism and Sentientism.

  • What would QRI do with $10 billion?

    • Currently, QRI is a geographically distributed organization with access to commercial-grade neuroimaging equipment. The first thing we’d do with $10 billion is set up a physical headquarters for QRI and buy professional-grade neuroimaging devices (fMRI, MEG, PET, etc.) and neurostimulation equipment. We’d also hire teams of full-time physicists, mathematicians, electrical engineers, computer scientists, neuroscientists, chemists, philosophers, and artists. We’ve accomplished a great deal on a shoestring budget, but it would be hard to overestimate how significant being able to build deep technical teams and related infrastructure around core research threads would be for us (and, we believe, for the growing field of consciousness research). Scaling is always a process, and we estimate our ‘room for funding’ over the next year is roughly $10 million. However, if we had sufficiently deep long-term commitments, we believe we could successfully scale both our organization and research paradigm into a first-principles approach for decisively diagnosing and curing most forms of mental illness. We would continue to run studies and experiments, collect interesting data about exotic and altered states of consciousness, pioneer new technologies that help eliminate involuntary suffering, and develop novel ways to enable conscious beings to safely explore the state-space of consciousness.

Questions About Our Research Approach

  • What differentiates QRI from other research groups studying consciousness?

    • The first major difference is that QRI breaks down “solving consciousness” into discrete subtasks; we’re clear about what we’re trying to do, which ontologies are relevant for this task, and what a proper solution will look like. This may sound like a small thing, but an enormous amount of energy is wasted in philosophy by not being clear about these things. This lets us “actually get to work.”

    • Second, our focus on valence is rare in the field of consciousness studies. A core bottleneck in understanding consciousness is determining what its ‘natural kinds’ are: terms which carve reality at the joints. We believe emotional valence (the pleasantness/unpleasantness of an experience) is one such natural kind, and this gives us a huge amount of information about phenomenology. It also offers a clean bridge for interfacing with (and improving upon) the best neuroscience.

    • Third, QRI takes exotic states of consciousness extremely seriously whereas most research groups do not. An analogy we make here is that ignoring exotic states of consciousness is similar to people before the scientific enlightenment believing they could understand the nature of energy, matter, and the physical world just by studying it at room temperature, while completely ignoring extreme states such as what’s happening in the sun, black holes, plasma, or superfluid helium. QRI considers exotic states of consciousness extremely important datapoints for reverse-engineering the underlying formalism for consciousness.

    • Lastly, we have a focus on precise, empirically testable predictions, which is rare in philosophy of mind. Any good theory of consciousness should contribute to advancements in neuroscience. Likewise, any good theory of neuroscience should contribute to novel, bold, falsifiable predictions, and to blueprints for useful things, such as new forms of therapy. A full-stack approach to consciousness that does both of these things is thus an important marker that “something interesting is going on here,” and is simply very useful for testing and improving theory.

  • What methodologies are you using? How do you actually do research? 

    • QRI has three core areas of research: philosophy, neuroscience, and neurotechnology.

      • Philosophy: Our philosophy research is grounded in the eight problems of consciousness. This divide-and-conquer approach lets us explore each subproblem independently, while being confident that when all piecemeal solutions are added back together, they will constitute a full solution to consciousness.

      • Neuroscience: We’ve done original synthesis work on combining several cutting-edge theories of neuroscience (the free energy principle, the entropic brain, and connectome-specific harmonic waves) into a unified theory of Bayesian emotional updating; we’ve also built the world’s first first-principles method for quantifying emotional valence from fMRI. More generally, we focus on collecting high valence neuroimaging datasets and developing algorithms to analyze, quantify, and visualize them. We also do extensive psychophysics research, focusing on both the fine-grained cognitive-emotional effects of altered states, and how different types of sounds, pictures, body vibrations, and forms of stimulation correspond with low and high valence states of consciousness.

      • Neurotechnology: We engage in both experimentation-driven exploration, tracking the phenomenological effects of various interventions, as well as theory-driven development. In particular, we’re prototyping a line of neurofeedback tools to help treat mental health disorders.

  • What does QRI hope to do over the next 5 years? Next 20 years?

    • Over the next five years, we intend to further our neurotechnology to the point that we can treat PTSD (post-traumatic stress disorder), especially treatment-resistant PTSD. We intend to empirically verify or falsify the symmetry theory of valence. If it is falsified, we will search for a new theory that ties together all of the empirical evidence we have discovered. We aim to create an Effective Altruist cause area regarding the reduction of intense suffering as well as the study of very high valence states of consciousness.

    • Over the next 20 years, we intend to become a world-class research center where we can put the discipline of “paradise engineering” (as described by philosopher David Pearce) on firm academic grounds.

Questions About Our Mission

  • How can understanding the science of consciousness make the world a better place?

    • Understanding consciousness would improve the world in a tremendous number of ways. One obvious outcome would be the ability to better predict what types of beings are conscious—from locked-in patients to animals to pre-linguistic humans—and what their experiences might be like.

    • We also think it’s useful to break down the benefits of understanding consciousness in three ways: reducing the amount of extreme suffering in the world, increasing the baseline well-being of conscious beings, and achieving new heights for what conscious states are possible to experience.

    • Without a good theory of valence, many neurological disorders will remain completely intractable. Disorders such as fibromyalgia, complex regional pain syndrome (CRPS), migraines, and cluster headaches are all currently medical puzzles and yet have incredibly negative effects on people’s livelihoods. We think that a mathematical theory of valence will explain why these things feel so bad and what the shortest path for getting rid of them looks like. Besides valence-related disorders, nearly all mental health disorders, from clinical depression and PTSD to schizophrenia and anxiety disorders, will become better understood as we discover the structure of conscious experience.

    • We also believe that many (though not all) of the zero-sum games people play are the products of inner states of dissatisfaction and suffering. Broadly speaking, people who have a surplus of cognitive and emotional energy tend to play more positive-sum games, are more interested in cooperation, and are highly motivated to cooperate. We think that studying states such as those induced by MDMA, which combine high valence with a prosocial mindset, can radically alter the game-theoretical landscape of the world for the better.

  • What is the end goal of QRI? What does QRI’s perfect world look like?

    • In QRI’s perfect future:

      • There is no involuntary suffering and all sentient beings are animated by gradients of bliss,

      • Research on qualia and consciousness is done at a very large scale for the purpose of mapping out the state-space of consciousness and understanding its computational and intrinsic properties (we think that we’ve barely scratched the surface of knowledge about consciousness),

      • We have figured out the game-theoretical subtleties needed to make that world dynamic yet stable: radically positive, without making it fully homogeneous and stuck in a local maximum.

Questions About Getting Involved

  • How can I follow QRI’s work?

    • You can start by signing up for our newsletter! This is by far our most important communication channel. We also have a Facebook page, Twitter account, and Linkedin page. Lastly, we share some exclusive tidbits of ideas and thoughts with our supporters on Patreon.

  • How can I get involved with QRI?

    • The best ways to help QRI are to:

      • Donate to help support our work.

      • Read and engage with our research. We love critical responses to our ideas and encourage you to reach out if you have an interesting thought!

      • Spread the word to friends, potential donors, and people that you think would make great collaborators with QRI.

      • Check out our volunteer page to find more detailed ways that you can contribute to our mission, from independent research projects to QRI content creation.

Questions About Consciousness

  • What assumptions about consciousness does QRI have? What theory of consciousness does QRI support?

    • The most important assumption that QRI is committed to is Qualia Formalism, the hypothesis that the internal structure of our subjective experience can be represented precisely by mathematics. We are also Valence Realists: we believe valence (how good or bad an experience feels) is a real and well-defined property of conscious states. Beyond these positions, we are fairly agnostic; everything else is an educated guess held for pragmatic purposes.

  • What does QRI think of functionalism?

    • QRI thinks that functionalism takes many high-quality insights about how systems work and combines them in a way that both creates confusion and denies the possibility of progress. In its raw, unvarnished form, functionalism is simply skepticism about the possibility of Qualia Formalism. It amounts to the statement that “there is nothing here to be formalized; consciousness is like élan vital, confusion to be explained away.” It’s not actually a theory of consciousness; it’s an anti-theory. This is problematic in at least two ways:

      • 1. By assuming consciousness has formal structure, we’re able to make novel predictions that functionalism cannot (see e.g. QRI’s Symmetry Theory of Valence, and Quantifying Bliss). A few hundred years ago, there were many people who doubted that electromagnetism had a unified, elegant, formal structure, and this was a reasonable position at the time. However, in the age of the iPhone, skepticism that electricity is a “real thing” that can be formalized is no longer reasonable. Likewise, everything interesting and useful QRI builds using the foundation of Qualia Formalism stretches functionalism’s credibility thinner and thinner.

      • 2. Insofar as functionalism is skeptical about the formal existence of consciousness, it’s skeptical about the formal existence of suffering and all sentience-based morality. In other words, functionalism is a deeply amoral theory, which, if taken seriously, dissolves all sentience-based ethical claims. This is because there are an infinite number of functional interpretations of a system: there’s no ground-truth fact of the matter about what algorithm a physical system is performing, about what information-processing it’s doing. And if there’s no ground-truth about which computations or functions are present, but consciousness arises from these computations or functions, then there’s no ground-truth about consciousness, or things associated with consciousness, like suffering. This is a strange and subtle point, but it’s very important. This point alone is not sufficient to reject functionalism: if the universe is amoral, we shouldn’t hold a false theory of consciousness in order to try to force reality into some ethical framework. But in debates about consciousness, functionalists should be up-front that functionalism and radical moral anti-realism are a package deal: inherent in functionalism is the counter-intuitive claim that just as we can reinterpret which functions a physical system is instantiating, we can reinterpret what qualia it’s experiencing and whether it’s suffering.

    • For an extended argument, see Against Functionalism.

  • What does QRI think of panpsychism?

    • At QRI, we hold a position that is close to dual-aspect monism or neutral monism, which states that the universe is composed of one kind of thing that is neutral, and that both the mental and physical are two features of this same substance. One of the motivating factors for holding this view is that if there is deep structure in the physical, then there should be a corresponding deep structure to phenomenal experience. And we can tie this together with physicalism in the sense that the laws of physics ultimately describe fields of qualia. While there are some minor disagreements between dual-aspect monism and panpsychism, we believe that our position mostly fits well with a panpsychist view—that phenomenal properties are a fundamental feature of the world and aren’t spontaneously created only when a certain computation is being performed.

    • However, even with this view, there still are very important questions, such as: what makes a unified conscious experience? Where does one experience end and another begin? Without considering these problems in the light of Qualia Formalism, it is easy to tie animism into panpsychism and believe that inanimate objects like rocks, sculptures, and pieces of wood have spirits or complex subjective experiences. At QRI, we disagree with this and think that these types of objects might have extremely small pockets of unified conscious experience, but will mostly be masses of micro-qualia that are not phenomenally bound into some larger experience.

  • What does QRI think of IIT (Integrated Information Theory)?

    • QRI is very grateful for IIT because it is the first mainstream theory of consciousness that satisfies a Qualia Formalist account of experience. IIT says (and introduced the idea!) that for every conscious experience, there is a corresponding mathematical object such that the mathematical features of that object are isomorphic to the properties of the experience. QRI believes that without this idea, we cannot solve consciousness in a meaningful way, and we consider the work of Giulio Tononi to be one of our core research lineages. That said, we are not in complete agreement with the specific mathematical and ontological choices of IIT, and we think it may be trying to ‘have its cake and eat it too’ with regard to functionalism vs physicalism. For more, see Sections III-V of Principia Qualia.

    • We make no claim that some future version of IIT, particularly something more directly compatible with physics, couldn’t cleanly address our objections, and see a lot of plausible directions and promise in this space.

  • What does QRI think of the free energy principle and predictive coding?

    • On our research lineages page, we list the work of Karl Friston as one of QRI’s core research lineages. We consider the free energy principle (FEP), as well as related research such as predictive coding, active inference, the Bayesian brain, and cybernetic regulation, as an incredibly elegant and predictive story of how brains work. Friston’s idea also forms a key part of the foundation for QRI’s theory of brain self-organization and emotional updating, Neural Annealing.

    • However, we don’t think that the free energy principle is itself a theory of consciousness, as it suffers from many of the shortcomings of functionalism: we can tell the story about how the brain minimizes free energy, but we don’t have a way of pointing at the brain and saying *there* is the free energy! The FEP is an amazing logical model, but it’s not directly connected to any physical mechanism. It is a story that “this sort of abstract thing is going on in the brain” without a clear method of mapping this abstract story to reality.

    • Friston has supported this functionalist interpretation of his work, noting that he sees consciousness as a process of inference, not a thing. That said, we are very interested in his work on calculating the information geometry of Markov blankets, as this could provide a tacit foundation for a formalist account of qualia under the FEP. Regardless of this, though, we believe Friston’s work will play a significant role in a future science of mind.

  • What does QRI think of global workspace theory?

    • Global workspace theory (GWT) is a cluster of empirical observations that seem very important for understanding which systems in the brain contribute to a reportable experience at a given point in time. GWT is thus an important clue for answering questions about what philosophers call Access Consciousness: the aspects of our experience on which we can report.

    • However, QRI does not consider global workspace theory to be a full theory of consciousness. Parts of the brain that are not immediately contributing to the global workspace may still be composed of micro-qualia, or tiny clusters of experience. These are impossible to report on, but they are still relevant to the study of consciousness. In other words, just because a part of your brain wasn’t included in the instantaneous global workspace doesn’t mean it can’t suffer or experience happiness. We value global workspace research because questions of Access Consciousness are still critical for a full theory of consciousness.

  • What does QRI think of higher-order theories of consciousness?

    • QRI is generally opposed to theories that equate consciousness with higher-order reflective thought and cognition. Some of the most intense conscious experiences are pre-reflective or unreflective, such as blind panic, religious ecstasy, experiences of 5-MeO-DMT, and cluster headaches. In these examples, there is little reflection or cognition going on, yet they are intensely conscious. Therefore, we largely reject any attempt to define consciousness in terms of a higher-order theory.

  • What is the relationship between evolution and consciousness?

    • The relationship between evolution and consciousness is very intricate and subtle. An eliminativist approach arrives at the simple idea that information processing of a certain type is evolutionarily advantageous, and perhaps we can call this consciousness. However, with a Qualia Formalist approach, it seems instead that the very properties of the mathematical object isomorphic to consciousness can play key roles (either causal or in terms of information processing) that make it advantageous for organisms to recruit consciousness.

    • If you don’t realize that consciousness maps onto a mathematical object with properties, you may think that you understand why consciousness was recruited by natural selection, but your understanding of the topic would be incomplete. In other words, to have a full understanding of why evolution recruited consciousness, you need to understand what advantages the mathematical object has. One very important feature of consciousness is its capacity for binding. For example, the unitary nature of experience—the fact that we can experience a lot of qualia simultaneously—may be a key feature of consciousness that accelerates the process of finding solutions to constraint satisfaction problems. In turn, evolution would hence have a reason to recruit states of consciousness for computation. So rather than thinking of consciousness as identical with the computation that is going on in the brain, we can think of it as a resource with unique computational benefits that are powerful and dynamic enough to make organisms that use it more adaptable to their environments.

  • Does QRI think that animals are conscious?

    • QRI thinks there is a very high probability that every animal with a nervous system is conscious. We are agnostic about unified consciousness in insects, but we consider it very likely. We believe research on animal consciousness is relevant to treating animals ethically. Additionally, we think that the ethical importance of consciousness has more to do with the pleasure-pain axis (valence) than with cognitive ability. In that sense, the suffering of non-human animals may be just as morally relevant as, if not more relevant than, that of humans. The cortex seems to play a largely inhibitory role for emotions, such that the larger the cortex, the better we are able to manage and suppress our emotions. Consequently, animals whose cortices are less developed than ours may experience pleasure and pain in a more intense and uncontrollable way, like a pre-linguistic toddler.

  • Does QRI think that plants are conscious?

    • We think it’s very unlikely that plants are conscious. The main reason is that they lack an evolutionary reason to recruit consciousness. Large-scale phenomenally bound experience may be very energetically expensive, and plants don’t have much energy to spare. Additionally, plants have thick cellulose walls that separate individual cells, making it very unlikely that plants can solve the binding problem and therefore create unified moments of experience.

  • Why do some people seek out pain?

    • This is a very multifaceted question. As a whole, we postulate that in the vast majority of cases, when somebody may be nominally pursuing pain or suffering, they’re actually trying to reduce internal dissonance in pursuit of consonance or they’re failing to predict how pain will actually feel. For example, when a person hears very harsh music, or enjoys extremely spicy food, this can be explained in terms of either masking other unpleasant sensations or raising the energy parameter of experience, the latter of which can lead to neural annealing: a very pleasant experience that manifests as consonance in the moment.

  • I sometimes like being sad. Is QRI trying to take that away from me?

    • Before we try to ‘fix’ something, it’s important to understand what it’s trying to do for us. Sometimes suffering leads to growth; sometimes creating valuable things involves suffering. Sometimes, ‘being sad’ feels strangely good. Insofar as suffering is doing good things for us, or for the world, QRI advocates a light touch (see Chesterton’s fence). However, we also suggest two things:

      • 1. Most kinds of melancholic or mixed states of sadness are usually pursued for reasons that cash out as some sort of pleasure. Bittersweet experiences are far preferable to intense agony or deep depression. If you enjoy sadness, it’s probably because there’s an aspect of your experience that is enjoyable. If it were possible to remove the sad part of your experience while maintaining the enjoyable part, you might be surprised to find that you prefer this modified experience to the original one.

      • 2. There are kinds of sadness and suffering that are just bad, that degrade us as humans, and would be better to never feel. QRI doesn’t believe in forcibly taking away voluntary suffering, or pushing bliss on people. But we would like to live in a world where people can choose to avoid such negative states, and on the margin, we believe it would be better for humanity for more people to be joyful, filled with a deep sense of well-being.

  • If dissonance is so negative, why is dissonance so important in music?

    • When you listen to very consonant music or consonant tones, you quickly adapt to these sounds and get bored of them. This has nothing to do with consonance itself being unpleasant and everything to do with learning in the brain. Whenever you experience the same stimulus repeatedly, the brain triggers a boredom mechanism and adds dissonance of its own in order to make you enjoy the stimulus less, or simply inhibits it so that you do not experience it at all. Semantic satiation is a classic example of this: repeating the same word over and over makes it lose its meaning. For this reason, to trigger many high-valence states of consciousness consecutively, you need contrast. In particular, music works with gradients of consonance and dissonance, and in most cases it is moving towards consonance that feels good, rather than the absolute value of consonance. Music tends to feel best when a high absolute value of consonance is combined with a very strong sense of moving towards an even higher value. Playing some dissonance during a song later enhances the enjoyment of the more consonant parts, such as the chorus, which is typically reported to be the most euphoric part of a song and is usually extremely consonant.
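As a toy illustration of why interval choice matters, here is a minimal sketch (not QRI's own model) of Sethares' well-known approximation to the Plomp-Levelt roughness curve; the constants are the published approximations, and the harmonic amplitudes are an assumption for illustration. It predicts that a minor second between complex tones is far rougher than a perfect fifth or an octave:

```python
import math

def pl_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares' approximation to the Plomp-Levelt roughness curve
    for a pair of pure tones (constants approximated from Sethares 1993)."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)   # scales the curve to the critical band
    d = abs(f2 - f1)
    return a1 * a2 * (math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d))

def total_dissonance(base, ratio, n_partials=6):
    """Sum pairwise roughness over the harmonics of two complex tones
    (assumed 1/k amplitude rolloff, a common illustrative choice)."""
    partials = [(base * k, 1.0 / k) for k in range(1, n_partials + 1)]
    partials += [(base * ratio * k, 1.0 / k) for k in range(1, n_partials + 1)]
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            (fa, aa), (fb, ab) = partials[i], partials[j]
            total += pl_dissonance(fa, fb, aa, ab)
    return total

rough = total_dissonance(440.0, 16 / 15)   # minor second
smooth = total_dissonance(440.0, 3 / 2)    # perfect fifth
print(rough > smooth)                      # the model predicts the minor second is rougher
```

Intuitively, the fifth's harmonics either coincide (contributing zero roughness) or sit far apart, while the minor second places every harmonic pair near the peak of the roughness curve.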

  • What is QRI’s perspective on AI and AI safety research?

    • QRI thinks that consciousness research is critical for addressing AI safety. Without a precise way of quantifying an action’s impact on conscious experiences, we won’t be able to guarantee that an AI system has been programmed to act benevolently. Also, certain types of physical systems that perform computational tasks may be experiencing negative valence without any outside observer being aware of it. We need a theory of what produces unpleasant experiences to avoid inadvertently creating superintelligences that suffer intensely in the process of solving important problems or accidentally inflict large-scale suffering.

    • Additionally, we think that a very large percentage of what will make powerful AI dangerous is that the humans programming these machines and using these machines may be reasoning from states of loneliness, resentment, envy, or anger. By discovering ways to help humans transition away from these states, we can reduce the risks of AI by creating humans that are more ethical and aligned with consciousness more broadly. In short: an antidote for nihilism could lead to a substantial reduction in existential risk.

    • One way to think about QRI and AI safety is that the world is building AI, but doesn’t really have a clear, positive vision of what to do with AI. Lacking this, the default objective becomes “take over the world.” We think a good theory of consciousness could and will offer new visions of what kind of futures are worth building—new Schelling points that humanity (and AI researchers) could self-organize around.

  • Can digital computers implementing AI algorithms be conscious?

    • QRI is agnostic about this question. We have reasons to believe that digital computers in their current form cannot solve the phenomenal binding problem. Most of the activity in digital computers can be explained in a stepwise fashion in terms of localized processing of bits of information. Because of this, we believe that current digital computers could be creating fragments of qualia, but are unlikely to be creating strongly globally bound experiences. So, we consider the consciousness of digital computers unlikely, although given our current uncertainty over the Binding Problem (or, alternatively framed, the Boundary Problem), this assumption is lightly held. In the previous question, when we write that “certain types of physical systems that perform computational tasks may be experiencing negative valence”, we assume that these hypothetical computers have some type of unified conscious experience as a result of having solved the phenomenal binding problem. For more on this topic, see “What’s Out There?”.

  • How much mainstream recognition has QRI’s work received, either for this line of research or others? Has it published in peer-reviewed journals, received any grants, or garnered positive reviews from other academics?

    • We are collaborating with researchers from Johns Hopkins University and Stanford University on several studies involving the analysis of neuroimaging data of high-valence states of consciousness. Additionally, we are currently preparing two publications for peer-reviewed journals on topics from our core research areas. Michael Johnson will be presenting at this year’s MCS seminar series, along with Karl Friston, Anil Seth, Selen Atasoy, Nao Tsuchiya, and others; Michael Johnson, Andrés Gómez Emilsson, and Quintin Frerichs have also given invited talks at various east-coast colleges (Harvard, MIT, Princeton, and Dartmouth).

    • Some well-known researchers and intellectuals who are familiar with our work and think positively about it include: Robin Carhart-Harris, Scott Alexander, David Pearce, Steven Lehar, Daniel Ingram, and more. Scott Alexander acknowledged that QRI put together the paradigms that contributed to Friston’s integrative model of how psychedelics work before his research was published. Our track record so far has been to foreshadow, by several years, key discoveries later proposed and accepted in mainstream academia. Given our current research findings, we expect this trend to continue in the years to come.


  • How does QRI know what is best for other people/animals? What about cultural relativism?

    • We think that, to a large extent, people and animals work under the illusion that they are pursuing intentional objects, states of the external environment, or relationships with the external environment. However, when you examine these situations closely, you realize that what we actually pursue are states of high valence triggered by external circumstances. There may be evolutionary and cultural selection pressures that push us toward self-deception about how we actually function. And we consider it harmful that these selection pressures make us less self-aware, because that often focuses our energy on unpleasant, destructive, or fruitless strategies. QRI hopes to support people in fostering more self-awareness, which can come through experiments with one’s own consciousness, like meditation, as well as through a deeper theoretical understanding of what it is that we actually want.

  • How central is David Pearce’s work to the work of the QRI?

    • We consider David Pearce to be one of our core lineages. We particularly value his contribution to valence realism, the insistence that states of consciousness come with an overall valence, and that this is very morally relevant. We also consider David Pearce to be very influential in philosophy of mind; Pearce, for instance, coined the phrase ‘tyranny of the intentional object’, the title of a core QRI piece of the same name. We have been inspired by Pearce’s descriptions for what any scientific theory of consciousness should be able to explain, as well as his particular emphasis on the binding problem. David’s vision of a world animated by ‘gradients of bliss’ has also been very generative as a normative thought experiment which integrates human and non-human well-being. We do not necessarily agree with all of David Pearce’s work, but we respect him as an insightful and vivid thinker who has been brave enough to actually take a swing at describing utopia and who we believe is far ahead of his time.

  • What does QRI think of negative utilitarianism?

    • There’s general agreement within QRI that intense suffering is an extreme moral priority, and we’ve done substantial work on finding simple ways of getting rid of extreme suffering (with our research inspiring at least one unaffiliated startup to date). However, we find it premature to strongly endorse any pre-packaged ethical theory, especially because none of them are based on any formalism, but rather on an ungrounded concept of ‘utility’. The value of information here seems enormous, and we hope that we can get to a point where the ‘correct’ ethical theory simply ‘pops out of the equations’ of reality. It’s also important to highlight that common academic formulations of utilitarianism seem blind to many subtleties concerning valence. For example, they do not distinguish between mixed states of consciousness, in which extreme pleasure is combined with extreme suffering such that you judge the experience to be neither entirely suffering nor entirely happiness, and states of complete neutrality, such as extreme white noise. Because most formulations of utilitarianism do not distinguish between these, we are generally suspicious of the idea that philosophers of ethics have considered all of the relevant attributes of consciousness needed to make accurate judgments about morality.

  • What does QRI think of philosophy of mind departments?

    • We believe that the problems philosophy of mind departments address tend to be very disconnected from what truly matters from an ethical, moral, and philosophical point of view. For example, there is little appreciation of the value of bringing mathematical formalisms into discussions about the mind, or of what that might look like in practice. Likewise, there is close to no interest in preventing extreme suffering or understanding its nature. Additionally, there is usually a disregard for extreme states of positive valence, and for strange or exotic experiences in general. There may be worthwhile things happening in the departments and classes creating and studying this literature, but we find them characterized by processes unlikely to produce progress on their nominal purpose of creating a science of mind.

    • In particular, in academic philosophy of mind, we’ve seen very little regard for producing empirically testable predictions. There are millions of pages written on the philosophy of mind, but the number of pages that provide precise, empirically testable predictions is quite small.

  • What therapies does QRI recommend for depression, anxiety, and chronic pain?

    • At QRI, we do not make specific recommendations to individuals, but rather point to areas of research that we consider to be extremely important, tractable, and neglected, such as anti-tolerance drugs, neural annealing techniques, frequency specific microcurrent for kidney stone pain, and N,N-DMT and other tryptamines for cluster headaches and migraines.

  • Why does QRI think it’s so important to focus on ending extreme suffering? 

    • QRI thinks ending extreme suffering is important, tractable, and neglected. It’s important because of the logarithmic scales of pleasure and pain—the fact that extreme suffering is far worse by orders of magnitude than what people intuitively believe. It’s tractable because there are many types of extreme suffering that have existing solutions that are fairly trivial or at least have a viable path for being solved with moderately funded research programs. And it’s neglected mostly because people are unaware of the existence of these states, though not necessarily because of their rarity. For example, 10% of the population experiences kidney stones at some point in their life, but for reasons having to do with trauma, PTSD, and the state-dependence of memory, even people who have suffered from kidney stones do not typically end up dedicating their time or resources toward eradicating them.

    • It’s also likely that if we can meaningfully improve the absolute worst experiences, much of the knowledge we’ll gain in that process will translate into other contexts. In particular, we should expect to figure out how to make moderately depressed people happier, fix more mild forms of pain, improve the human hedonic baseline, and safely reach extremely great peak states. Mood research is not a zero-sum game. It’s a web of synergies.
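The "orders of magnitude" claim can be made concrete with a toy calculation. Assuming, purely for illustration, that each point on a 0-10 pain scale multiplies intensity by a constant factor (the factor of 3 below is hypothetical, not an empirical estimate from the Logarithmic Scales of Pleasure and Pain essay):

```python
def implied_intensity(rating, factor=3.0):
    """Intensity implied by a 0-10 self-report under the (assumed)
    hypothesis that each extra point multiplies intensity by `factor`.
    The default factor of 3 is purely illustrative."""
    return factor ** rating

# Naive linear reading: an 8/10 experience is "twice as bad" as a 4/10.
linear_ratio = 8 / 4
# Logarithmic reading: it is factor**4 = 81 times as bad.
log_ratio = implied_intensity(8) / implied_intensity(4)
print(log_ratio)  # prints 81.0
```

Under this reading, the worst experiences dwarf moderate ones, which is why targeting the extremes can dominate an impact calculation.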

Many thanks to Andrew Zuckerman, Mackenzie Dion, and Mike Johnson for their collaboration in putting this together. Featured image is QRI’s logo – animated by Hunter Meyer.

The QRI Ecosystem: Friends, Collaborators, Blogs, Media, and Adjacent Communities

The Qualia Research Institute has the vision of a world free from involuntary suffering, in which conscious agents are empowered to have full control over their lived experiences. It pursues this vision by combining foundational research on consciousness with a focus on the mathematical properties of pleasure and pain, aiming at a full, formal account of valence.

By relating our mission to existing memeplexes, we could perhaps accurately describe the ethos of QRI as “Qualia Formalist Sentientist Effective Altruism”. That’s a mouthful. Let’s break it down:

  • Qualia Formalism refers to the notion that experience has a precise mathematical description that ties it with physics (for a more detailed breakdown see the Formalism section of the glossary).
  • Sentientism refers to the claim that value and disvalue are entirely expressed in the nature and quality of conscious experiences. In other words, that the only reason why states of affairs matter is because of the way in which they impact experiences.
  • Effective Altruism refers to the view that we should aspire to do the most good we can rather than settle for less. If you examine the extent to which different interventions cash out in terms of reduction in suffering throughout the world, you will notice that they follow a long-tail distribution. Thus, research on how to prioritize interventions really pays off. Focusing on the top interventions (and being willing to spend extra time digging for even better ones) can multiply your positive impact by orders of magnitude.
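The long-tail point above can be illustrated with a small simulation. Here intervention effectiveness is drawn from a log-normal distribution; the parameters are arbitrary and only meant to show the qualitative shape, not to model any real dataset:

```python
import random
import statistics

# Toy model: effectiveness of 10,000 hypothetical interventions drawn
# from a heavy-tailed (log-normal) distribution. Parameters are arbitrary.
random.seed(0)
impacts = sorted((random.lognormvariate(0, 2.5) for _ in range(10_000)),
                 reverse=True)

top_1pct = sum(impacts[:100])
total = sum(impacts)
median = statistics.median(impacts)

# Under a heavy tail, the best few interventions account for a large
# share of the total achievable impact...
print(f"top 1% share of total impact: {top_1pct / total:.0%}")
# ...and the single best intervention beats the median one by orders
# of magnitude, which is why prioritization research pays off.
print(f"best / median impact ratio: {impacts[0] / median:,.0f}x")
```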

We could thus say that people and organizations are more or less aligned with QRI to the extent that they are aligned with each of these notions and their combinations. Moreover, QRI also values the practice of rational psychonautics and the study of one’s own mind with meditation – hence we also include lists of rational psychonauts and great dharma teachers.

Find below the list of people and organizations that have a significant degree of alignment with QRI on each front. We also include a list of blogs and websites from readers of our work, which is meant to incentivize community-building around the aforementioned core ideas.


Name of Person/Organization – Blog/Website/Media [if any] (Representative Post of the Author – Sometimes Not from Their Primary Site [if any])

QRI Canon

Qualia Research Institute – QRI (Glossary)

Michael Edward Johnson – Open Theory (Neural Annealing)

Andrés Gómez Emilsson – Qualia Computing (Wireheading Done Right)

Current and Former QRI Employees and Collaborators Who Write About QRI Topics

Romeo Stevens – Neurotic Gradient Descent (Core Transformation)

Quintin Frerichs – The Youtopia Project (Wada Test + Phenomenal Puzzles)

Andrew “Zuck” Zuckerman – (Super Free Will)

Kenneth Shinozuka – Blank Horizons (A Future for Humanity)

Wendi Yan – (The Psychedelic Club)

Jeremy Hadfield – (How to Steal a Vibe)

Elin Ahlstrand – Mind Nomad (Floating Through First Fears)

Margareta Wassinge and Anders Amelin – Qualia Productions (When AI Means Advanced Incompetence)

List of current and former QRI collaborators and volunteers not listed above (in no particular order): Patrick Taylor, Hunter Meyer, Sean McGowan, Alex Zhao, Boian Etropolski, Robin Goins, Bence Vass, Brian Westerman, Jacob Shwartz-Lucas.

People and Organizations that Advocate for Sentientism and the Elimination of Suffering

David Pearce – (The Hedonistic Imperative)

Manu Herrán – (Psychological Biases that Impede the Success in the Reduction of Intense Suffering Movement)

Jonathan Leighton – (Why Access to Morphine is a Human Right)

Magnus Vinding – (Suffering-Focused Ethics: Defense and Implications)

Robert Daoust – (Review of Precursor Works)

Jacob Shwartz-Lucas – Invincible Wellbeing (Pleasure in the Brain)

Algosphere Alliance – (Vision)

Organization for the Prevention of Intense Suffering (OPIS) – (Cluster Headaches and Potential Therapies)

Sentience Research – (Algonomy)

People and Organizations Aligned with Qualia Formalism

Giulio Tononi – (Phi: A Voyage from the Brain to the Soul)

Steven Lehar – (Harmonic Resonance Theory)

Jonathan W. D. Mason – (Quasi-Conscious Multivariate System)

Johannes Kleiner – (Mathematical Consciousness Science)

Dan Lloyd – Labyrinth of Consciousness (The Music of Consciousness)

Luca Turin – A Spectroscopic Mechanism for Primary Olfactory Reception (The Science of Scent)

William Marshall – Google Scholar (PyPhi)

Larissa Albantakis – Google Scholar (Causal Composition)

Models of Consciousness Conference – (YouTube channel)

People and Organizations Aligned with Effective Altruism

Nick Bostrom – (What is a Singleton?)

Anders Sandberg – (Uriel’s Stacking Problem)

Toby Ord – (The Precipice)

80,000 Hours – (We Could Feed All 8 Billion People Through a Nuclear Winter)

Future of Humanity Institute – (Publications)

Future of Life Institute – (AI Alignment Podcast: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson)

Center on Long-Term Risk – (The Case for Suffering-Focused Ethics)

Rethink Priorities – (Invertebrate Welfare Cause Profile)

Happier Lives Institute – (Cause Profile: Mental Health)

Effective Altruism Forum – (Logarithmic Scales of Pleasure and Pain)

Rational Psychonautics

Steven Lehar – (The Grand Illusion)

James Kent – (The Control Interrupt Model of Psychedelic Action)

Alexander Shulgin – Shulgin Research Institute (Phenethylamines I Have Known And Loved)

Thomas S. Ray – Breadth and Depth (Psychedelics and the Human Receptorome)

Matthew Baggott – Beyond Fear: MDMA and Emotion (MDA and Contour Integration)

Psychonaut Wiki – (Visual Effects)

Psychedelic Replications – (Best of All Times Replications; specific floor tile example)

Great Dharma Teachers

Daniel M. Ingram – Integrated Daniel (No-Self vs. True Self)

Leigh Brasington – (Right Concentration)

Shinzen Young – (The Science of Enlightenment)

Culadasa – (Joy and Meditation)

QRI Friends and Supporters

Ryan Ferris and James Ormrod – The Good Timeline (5-MeO-DMT, Paradise Engineering)

Adrian Nelson – Origins of Consciousness (Consciousness Blindness in Science Fiction)

Alex K. Chen – Quora (What are the long term effects of Adderall, Dexedrine, or Ritalin use?)

Andy Vargas – Neologos (Praxis for Open Individualism; Purpose Statement)

Tyger Gruber – (The Show)

Jacob Lyles – Jacob ex machina (Building a Better Anti-Capitalism)

Adjacent Communities, Organizations, and Allies

Scott Alexander – Slate Star Codex (Relaxed Beliefs Under Psychedelics and the Anarchic Brain)

Geoffrey Miller – Primal Poly (The Mating Mind: How sexual choice shaped the evolution of human nature)

Zvi Mowshowitz – Don’t Worry About the Vase (More Dakka)

Sarah Constantin – Multiple websites: 1, 2, 3 (More Dakka in Medicine)

Scott Aaronson – (Why I Am Not An Integrated Information Theorist)

Gwern – (Iodine and IQ Meta-Analysis)

Venkatesh Rao – Ribbonfarm (Why We Slouch)

David Chapman – (Romantic Rebellion)

Atman Retreat – (FAQ)

Foresight Institute – (YouTube Channel)

Convergence Analysis – (List of Works)

Simulation Series – About (YouTube Channel)

Consciousness Hacking – (blog posts)

HeartMath Institute – (Chapter on Coherence)

The Wider World of People Who are Friends and Acquaintances of the QRI Ecosystem

Note: I asked (on social media) our readers to share their blogs and personal sites with us. Some of these links are very aligned with QRI and some are not. That said, together they represent a good sample of the memetic ecosystem that surrounds QRI. Namely, these links can be taken as a whole to be suggestive of “the memetic ground upon which QRI is founded”. Please feel free to share your blog or personal site in the comment section of this post.

Jack Foust – Welcome to the Symbolic Domain

Scott Jackisch – Oakland Futurist (Art as a Superweapon)

Maurits Luyben – Energy and Structure

Anonymous – deluks917 (What does ‘Actually Trying’ look like?)

Sameer Halai – (Toilet Paper Shortage is Not Caused by Hoarding)

Yohan John – (Some Wild Speculation On Goodhart’s Law And Its Manifestations In The Brain)

Jamie Joyce – The Society Library (Deconstructing the Logic of “Plandemic”)

João Mirage – YouTube Channel (The Mirror of the Spirit)

Natália Mendonça – Axiomatic Doubts (What Truths are Worth Seeking?)

Dustin Ali Francis Janatpour – Tales From Samarkand (The Inspector and the Crow)

Zarathustra Amadeus Goertzel – (Garden of Minds)

Duncan Sabien – Human Parts (In Defense of Punch Bug)

Brenda Esquivel – Abanico de Historias (La Reina Tamar y el Pájaro Condenado)

Vishnu Bachani – (Latent Possibilities of the Tonal System)

Martin Utheraptor Duřt – (Psychedelic Series)

Qiaochu Yuan – Thicket Forte (Monist Nihilism)

Jedediah Logan – Medium Account (Coping with Death During the COVID-19 Crisis)

Eliezer da Silva – (Prior Specification via Prior Predictive Matching)

Cassandra McClure – Lexicaldoll (On Save States)

Gaige Clark – Querky Science (The Phoenix Effect)

Ben Finn – (Too much to do? Plan your day with Hopscotch [longer])

Michael Dello-Iacovo – (How I Renounced Christianity and Became Atheist)

Robin Hanson – Overcoming Bias (What Can Money Buy Directly?)

Katja Grace – Worldly Positions, AI Impacts

Mundy Otto Reimer – (On Thermodynamics, Agency, and Living Systems)

Khuyen Bui – Medium Account (Beyond Ambition)

Jessica Watson Miller – Autotranslucence (Art as the Starting Point; Becoming a Magician)

Aella – (The Trauma Narrative)

Jacob Falkovich – Put a Number on It (The Scent of Bad Psychology)

Javi Otero – (Fractal Entrainment: A New Psychoacoustic Technology Inspired by Nature)

José Luis Ricón – Nintil

Eliot Redelman – BearLamp

Tee Barnett – (Are you a job search drone?)

Juan Fernandez Zaragoza – (Pandemia de Ideas)

Eric Layne – (The Antidote to a Global Crisis)

Kazi Adi Shakti – (Beyond Affirmation and Negation)

Pushan Kumar Datta – kaiserpush1 (Ramayana and Cognition of Self)

Yan Liu – Inflection Point (Seeing a World Unshackled from Neoclassical Economics)

Joseph Kelly – (Entrepreneurship is Metaphysical Labor)

Logan Thrasher Collins –

Malcolm Ocean – (Transcending Regrets, Problems, and Mistakes)

Jesse Parent – (Why ‘Be Yourself’ is Still Excellent Relationship Advice)

Milan Griffes – Flight From Perfection (Contemplative Practices, Optimal Stopping, Explore/Exploit)

Cody Kuiack – (The Holomorphic Self – Meditations)

Daniel Eth – (Quantum Computing for Morons)

Brian P. Ellis – (Refuting Dr. Erickson and Dr. Massihi)

John Greer – (The Three Buckets)

Finally: List of Other Relevant Lists

Effective Altruism Blogs –

LessWrong Wiki – List of Rationalist Diaspora Blogs

Effective Altruism Hub – (Resources)

Open Individualism Readings – r/OpenIndividualism (Wiki Reading List)

Phenomenal Binding Resources –

Physicalist Hotlinks –

Qualia Productions Presents: When AI Equals Advanced Incompetence

By Maggie and Anders Amelin

Letter I: Introduction

We are Maggie & Anders, a mostly harmless Swedish old-timer couple only now beginning to discover the advanced incompetence that is the proto-science — or “alchemy” — of consciousness research. A few centuries ago a philosopher of chemistry could have claimed with a straight face to be quite certain that a substance with negative mass had to be invoked to explain the phenomenon of combustion. Another could have been equally convinced that the chemistry of life involves a special force of nature absent from all non-living matter. A physicist of today may recognize that the study of consciousness has even less experimental foundation than alchemy did, yet be confident that at least it cannot feel like something to be a black hole. Since, obviously, black holes are simple objects, and consciousness is a phenomenon that only emerges from “complexity” as high as that of a human brain.

Is there some ultimate substrate, basic to reality and which has properties intrinsic to itself? If so, is elementary sentience one of those properties? Or is it “turtles all the way down” in a long regress where all of reality can be modeled as patterns within patterns within patterns ending in Turing-style “bits”? Or parsimoniously never ending?

Will it turn out to be patterns all the way down, or sentience all the way up? Should people who believe themselves to perhaps be in an ancestor simulation take for granted that consciousness exists for biologically-based people in base-level reality? David Chalmers does. So at least that must be one assumption it is safe to make, isn’t it? And the one about no sentience existing in a black hole. And the one about phlogiston. And the four chemical elements.

This really is good material for silly comedy or artistic satire. To view a modest attempt by us in that direction, please feel encouraged to enjoy this YouTube video we made with QRI in mind:

When ignorance is near complete, it is vital to think outside the proverbial box if progress is to be made. However, spontaneous creative speculation is more context-constrained than it feels like, and it rarely correlates all that beautifully with anything useful. Any science has to work via the baby steps of testable predictions. Integrated information theory (IIT) does just that, and has produced encouraging early results. IIT could turn out to be a good starting point for eventually mapping and modeling all of experiential phenomenology. For perspective, IIT 3.0 may be comparable to how Einstein’s modeling of the photoelectric effect stands in relation to a full-blown theory of quantum gravity. There is a fair bit of ground to cover. We have not been able to find any group more likely than the QRI to speed up the process whereby humanity eventually manages to cover that ground. That is, if they get a whole lot of help in the form of outreach, fundraising and technological development. Early pioneers have big hurdles to overcome, but the difference they can make for the future is enormous.

For those who feel inspired, a nice start is to go through all that is on or linked via the QRI website. Indulge in Principia Qualia. If that leaves you confused on a higher level, you are in good company. With us. We are halfway senile and are not information theorists, neuroscientists or physicists. All we have is a nerdy sense of humor and work experience in areas like marketing and planetary geochemistry. One thing we think we can do is help bridge the gap between “experts” and “lay people”. Instead of “explain it like I am five”, we offer the even greater challenge of explaining it like we are Maggie & Anders. Manage that, and you will definitely be wiser afterwards!

– Maggie & Anders

Letter II: State-Space of Matter and State-Space of Consciousness

A core aspect of science is the mapping out of distributions, spectra, and state-spaces of the building blocks of reality. Naturally occurring states of things can be spontaneously discovered. To gain more information about them, one can experimentally alter such states to produce novel ones, and then analyze them in a systematic way.

The full state-space of matter is multidimensional and vast. Zoom in anywhere in it and there will be a number of characteristic physics phenomena appearing there. Within a model of the state-space you can follow independent directions as you move towards regions and points. As an example, you can hold steady at one particular simple chemical configuration. Diamond, say. The stable region of diamond and its emergent properties, like high hardness, extends certain distances in other parameter directions such as temperature and pressure. The diamond region has neighboring regions with differently structured carbon, such as graphite. Diamond and graphite make for an interesting case since the property of hardness emerges very differently in the two regions. (In the pure carbon state-space, the dimensions denoting amounts of all other elements can be said to be there but set to zero.)

Material properties like hardness can be modeled as static phenomena. According to IIT, however, consciousness cannot. It is still an emergent property of matter, though, so just stay in the matter state-space and add a time dimension to it. Then open chains and closed loops of causation emerge as a sort of fundamental level of what matter “does”. Each elementary step of causation may be regarded as producing, or intrinsically being, some iota of proto-experience. In feedback loops this self-amplifies into states of feeling like something. Many or perhaps most forms of matter can “do” these basic things at various regions of various combinations of parameter settings. Closed causal loops require more delicate fine-tuning in parameter space, so the state-space of nonconscious causation structure is larger than that of conscious structure.

The famous “hard problem” has to do with the fact that both an experientially very weak and a very strong state can emerge from the same matter (shown to be the case so far only within brains). It is a bit like the huge difference in mechanical hardness between diamond and graphite, both emerging from the same pure carbon substrate (a word play on “hard” to make it sticky).

By the logic of IIT it should be possible to model (in arbitrarily coarse or fine detail) the state-space of all conscious experience whose substrate is all possible physical states of pure carbon. Or at room temperature in any material. And so on. If future advanced versions of IIT turn out to be a success, then we may guess there will be enough overlap to allow for a certain “substrate invariance” for hardware that can support intelligence with human-recognizable consciousness. Outside of that there will be a gargantuan additional novel space to explore. It ought to contain maxima of (intrinsic) attractiveness, none of which need to reside within what a biological nervous system can host. Biological evolution has only been able to search through certain parts of the state-space of matter. One thing it has not worked with on Earth is pure carbon. Diamond tooth enamel or carbon nanotube tendons would be useful, but no animal has them. What about conscious states? Has biology come close to hitting upon any of the optima in those? If all of human sentience is like planet Earth, and all of Terrestrial biologically-based sentience is like the whole Solar System, that leaves an entire extrasolar galaxy out there to explore. (Boarding call: SpaceX Flight 42 bound for Nanedi Settlement, Mars. Sentinauts please go to the Neuralink check-in terminal.)

Of course we don’t currently know how IIT is going to stand up, but thankfully it does make testable predictions. There is, therefore, a beginning of something to be hoped for with it. In a hopeful scenario IIT turns out to be like special relativity, and what QRI is reaching for is like quantum gravity. It will be a process of taking baby steps, for sure. But each step is likely to bring benefits in many ways.

Is any of this making you curious? Then you may enjoy reading “Principia Qualia” and other QRI articles.

– Maggie & Anders

Breaking Down the Problem of Consciousness

Below you will find three different breakdowns for what a scientific theory of consciousness must be able to account for, formulated in slightly different ways.

First, David Pearce posits these four fundamental questions (the simplicity of this breakdown comes with the advantage that it might be the easiest to remember):

  1. The existence of consciousness
  2. The causal and computational properties of experience (including why we can even talk about consciousness to begin with, why consciousness evolved in animals, etc.)
  3. The nature and interrelationship between all the qualia varieties and values (why does scent exist? and in exactly what way is it related to color qualia?)
  4. The binding problem (why are we not “mind dust” if we are made of atoms)

David Pearce’s Four Questions Any Scientific Theory of Consciousness Must Be Able to Answer

Second, we have Giulio Tononi‘s IIT:

  1. The existence of consciousness
  2. The composition of consciousness (colors, shapes, etc.)
  3. Its information content (the fact that each experience is “distinct”)
  4. The unity of consciousness (why does seeing the color blue not only change a part of your visual field but, in some sense, change the experience as a whole?)
  5. The borders of experience (also called ‘exclusion principle’; that each experience excludes everything not in it; presence of x implies negation of ~x)

Giulio Tononi’s 5 Axioms of Consciousness

Finally, Michael Johnson breaks the problem down into what he sees as a set of ultimately tractable problems. As a whole, the problem of consciousness may be conceptually daunting and scientifically puzzling, but this framework seeks to paint a picture of what a solution should look like. The problems are:

  1. Reality mapping problem (what is the formal ontology that can map reality to consciousness?)
  2. Substrate problem (in such an ontology, which objects and processes contribute to consciousness?)
  3. Boundary problem (akin to the binding problem, but reformulated to be agnostic about an atomistic ontology of systems)
  4. Scale problem (how to connect the scale of our physical ontology with the spatio-temporal scale at which experiences happen?)
  5. Topology of information problem (how do we translate the physical information inside the boundary into the adequate mathematical object used in our formalism?)
  6. State-space problem (what mathematical features does each qualia variety, value, and binding architecture correspond to?)
  7. Translation problem (starting with the mathematical object corresponding to a specific experience within the correct formalism, how do you derive the phenomenal character of the experience?)
  8. Vocabulary problem (how can we improve language to talk directly about natural kinds?)

Michael Johnson’s 8 Problems of Consciousness

Each of these breakdowns has advantages and disadvantages, but I think they are all very helpful and capable of improving the way we understand consciousness. While pondering the “hard problem of consciousness” can lead to fascinating and strange psychological effects (much akin to asking the question “why is there something rather than nothing?”), addressing the problem space at a finer level of granularity almost always delivers better results. In other words, posing the “hard problem” is less useful than decomposing the question into actually addressable problems. The overall point is that by doing so one is, in some sense, actually trying to understand rather than merely restating one’s own confusion.

Do you know of any other such breakdown of the problem space?