A Non-Circular Solution to the Measurement Problem: If the Superposition Principle is the Bedrock of Quantum Mechanics, Why Do We Experience Definite Outcomes?

Source: Quora question – “Scientifically speaking, how serious is the measurement problem concerning the validity of the various interpretations in quantum mechanics?”


David Pearce responds [emphasis mine]:

It’s serious. Science should be empirically adequate. Quantum mechanics is the bedrock of science. The superposition principle is the bedrock of quantum mechanics. So why don’t we ever experience superpositions? Why do experiments have definite outcomes? “Schrödinger’s cat” isn’t just a thought-experiment. The experiment can be done today. If quantum mechanics is complete, then microscopic superpositions should rapidly be amplified via quantum entanglement into the macroscopic realm of everyday life.
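To make the amplification explicit (my gloss in standard von Neumann notation, not Pearce’s own): the unitary dynamics entangle a microscopic superposition with the detector and, ultimately, the cat, and nothing in the dynamics selects one branch:

```latex
% A microscopic superposition, a detector, and a cat evolve unitarily into
% an entangled "cat state"; no term in the dynamics picks a single outcome.
\[
\bigl(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\bigr)
\otimes|\text{ready}\rangle\otimes|\text{alive}\rangle
\;\xrightarrow{\;\hat U\;}\;
\alpha\,|{\uparrow}\rangle|\text{no click}\rangle|\text{alive}\rangle
\;+\;
\beta\,|{\downarrow}\rangle|\text{click}\rangle|\text{dead}\rangle
\]
```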

Copenhagenists are explicit. The lesson of quantum mechanics is that we must abandon realism about the micro-world. But Schrödinger’s cat can’t be quarantined. The regress spirals without end. If quantum mechanics is complete, the lesson of Schrödinger’s cat is that if one abandons realism about a micro-world, then one must abandon realism about a macro-world too. The existence of an objective physical realm independent of one’s mind is certainly a useful calculational tool. Yet if all that matters is empirical adequacy, then why invoke such superfluous metaphysical baggage? The upshot of Copenhagen isn’t science, but solipsism.

There are realist alternatives to quantum solipsism. Some physicists propose that we modify the unitary dynamics to prevent macroscopic superpositions. Roger Penrose, for instance, believes that a non-linear correction to the unitary evolution should be introduced to prevent superpositions of macroscopically distinguishable gravitational fields. Experiments to (dis)confirm the Penrose-Hameroff Orch-OR conjecture should be feasible later this century. But if dynamical collapse theories are wrong, and if quantum mechanics is complete (as most physicists believe), then “cat states” should be ubiquitous. This doesn’t seem to be what we experience.

Everettians are realists, in a sense. Unitary-only QM says that there are quasi-classical branches of the universal wavefunction where you open an infernal chamber and see a live cat, other decohered branches where you see a dead cat; branches where you perceive the detection of a spin-up electron that has passed through a Stern–Gerlach device, other branches where you perceive the detector recording a spin-down electron; and so forth. I’ve long been haunted by a horrible suspicion that unitary-only QM is right, though Everettian QM boggles the mind (cf. Universe Splitter). Yet the heart of the measurement problem from the perspective of empirical science is that one doesn’t ever see superpositions of live-and-dead cats, or detect superpositions of spin-up-and-spin-down electrons, but only definite outcomes. So the conjecture that there are other, madly proliferating decohered branches of the universal wavefunction where different versions of you record different definite outcomes doesn’t solve the mystery of why anything anywhere ever seems definite to anyone at all. Therefore, the problem of definite outcomes in QM isn’t “just” a philosophical or interpretational issue, but an empirical challenge for even the most hard-nosed scientific positivist. “Science” that isn’t empirically adequate isn’t science: it’s metaphysics. Some deeply-buried background assumption(s) or presupposition(s) that working physicists are making must be mistaken. But which? To quote the 2016 International Workshop on Quantum Observers organized by the IJQF,

“…the measurement problem in quantum mechanics is essentially the determinate-experience problem. The problem is to explain how the linear quantum dynamics can be compatible with the existence of our definite experience. This means that in order to finally solve the measurement problem it is necessary to analyze the observer who is physically in a superposition of brain states with definite measurement records. Indeed, such quantum observers exist in all main realistic solutions to the measurement problem, including Bohm’s theory, Everett’s theory, and even the dynamical collapse theories. Then, what does it feel like to be a quantum observer?”

Indeed. Here I’ll just state rather than argue my tentative analysis.
Monistic physicalism is true. Quantum mechanics is formally complete. There is no consciousness-induced collapse of the wave function, no “hidden variables”, nor any other modification or supplementation of the unitary Schrödinger dynamics. The wavefunction evolves deterministically according to the Schrödinger equation as a linear superposition of different states. Yet what seems empirically self-evident, namely that measurements always find a physical system in a definite state, is false(!) The received wisdom, repeated in countless textbooks, that measurements always find a physical system in a definite state reflects an erroneous theory of perception, namely perceptual direct realism. As philosophers (e.g. the “two worlds” reading of Kant) and even poets (“The brain is wider than the sky…”) have long realised, the conceptual framework of perceptual direct realism is untenable. Only inferential realism about mind-independent reality is scientifically viable.

Rather than assuming that superpositions are never experienced, suspend disbelief and consider the opposite possibility. Only superpositions are ever experienced. “Observations” are superpositions, exactly as unmodified and unsupplemented quantum mechanics says they should be: the wavefunction is a complete representation of the physical state of a system, including biological minds and the pseudo-classical world-simulations they run. Not merely “It is the theory that decides what can be observed” (Einstein); quantum theory decides the very nature of “observation” itself.

If so, then the superposition principle underpins one’s subjective experience of definite, well-defined classical outcomes (“observations”), whether, say, a phenomenally-bound live cat, or the detection of a spin-up electron that has passed through a Stern–Gerlach device, or any other subjectively determinate outcome. If one isn’t dreaming, tripping or psychotic, then within one’s phenomenal world-simulation, the apparent collapse of a quantum state (into one of the eigenstates of the Hermitian operator associated with the relevant observable, in accordance with a probability calculated as the squared absolute value of a complex probability amplitude) consists of fleeting uncollapsed neuronal superpositions within one’s CNS. To solve the measurement problem, the neuronal vehicle of observation and its subjective content must be distinguished. The universality of the superposition principle – not its unexplained breakdown upon “observation” – underpins one’s classical-seeming world-simulation. What naïvely seems to be the external world, i.e. one’s egocentric world-simulation, is what linear superpositions of different states feel like “from the inside”: the intrinsic nature of the physical. The otherwise insoluble binding problem in neuroscience and the problem of definite outcomes in QM share a solution.
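For reference, the parenthetical above paraphrases the standard Born rule; in bra-ket notation:

```latex
% Probability of obtaining eigenvalue a_i of observable \hat A for a system
% in state |\psi\rangle, with |a_i\rangle the corresponding eigenstate.
\[
P(a_i) \;=\; \bigl|\langle a_i\mid\psi\rangle\bigr|^{2}
\]
```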

Absurd?
Yes, for sure: this minimum requirement for a successful resolution of the mystery is satisfied (“If at first the idea is not absurd, then there is no hope for it” – Einstein, again). The raw power of environmentally-induced decoherence in a warm environment like the CNS makes the conjecture intuitively flaky. Assuming unitary-only QM, the effective theoretical lifetime of neuronal “cat states” in the CNS is less than a femtosecond. Neuronal superpositions of distributed feature-processors are intuitively just “noise”, not phenomenally-bound perceptual objects. At best, the idea that sub-femtosecond neuronal superpositions could underpin our experience of law-like classicality is implausible. Yet we’re not looking for plausible theories but testable theories. Every second of selection pressure in Zurek’s sense (cf. “Quantum Darwinism”) sculpting one’s neocortical world-simulation is more intense and unremitting than four billion years of evolution as conceived by Darwin. My best guess is that interferometry will disclose a perfect structural match. If the non-classical interference signature doesn’t yield a perfect structural match, then dualism is true.

Is the quantum-theoretic version of the intrinsic nature argument for non-materialist physicalism – more snappily, “Schrödinger’s neurons” – a potential solution to the measurement problem? Or a variant of the “word salad” interpretation of quantum mechanics?
Sadly, I can guess.
But if there were one experiment that I could do, one loophole I’d like to see closed via interferometry, then this would be it.



Psychedelic Turk: A Platform for People on Altered States of Consciousness

An interesting variable is how much external noise is optimal for peak processing. Some, like Kafka, insisted that “I need solitude for my writing; not ‘like a hermit’ – that wouldn’t be enough – but like a dead man.” Others, like von Neumann, insisted on noisy settings: von Neumann would usually work with the TV on in the background, and when his wife moved his office to a secluded room on the third floor, he reportedly stormed downstairs and demanded “What are you trying to do, keep me away from what’s going on?” Apparently, some brains can function with (and even require!) high amounts of sensory entropy, whereas others need essentially zero. One might look for different metastable thresholds and/or convergent cybernetic targets in this case.

– Mike Johnson, A future for neuroscience

My drunk or high Tweets are my best work.

– Joe Rogan, Vlog#18

Introduction

Mechanical Turk is a service that makes outsourcing simple tasks to a large number of people extremely easy. The only constraint is that the tasks outsourced ought to be the sort of thing that can be explained and performed within a browser in less than 10 minutes, which in practice is not a strong constraint for most tasks you would outsource anyway. This service is in fact a remarkably effective way to accelerate the testing of digital prototypes at a reasonable price.
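For concreteness, here is a minimal sketch of what posting such a task (a “HIT”) looks like through MTurk’s actual API via boto3; the title, reward, timing values, and question file are illustrative placeholders, not recommendations:

```python
# Minimal sketch: posting a browser task to Mechanical Turk with boto3.
# All concrete values below (title, reward, durations, XML file) are
# illustrative placeholders; consult the MTurk docs before going live.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint, so experimenting doesn't spend real money.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

response = mturk.create_hit(
    Title="Rate the pleasantness of 20 short sounds",
    Description="Listen to short audio clips and rate each on a 1-7 scale.",
    Reward="0.50",                        # USD, passed as a string per the API
    MaxAssignments=30,                    # number of distinct workers
    LifetimeInSeconds=24 * 3600,          # how long the task stays listed
    AssignmentDurationInSeconds=10 * 60,  # the <10-minute constraint above
    Question=open("auditory_hedonics.xml").read(),  # ExternalQuestion XML
)
print(response["HIT"]["HITId"])
```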

I think the core idea has incredible potential for the field we explore in this blog: consciousness research and the creation of consciousness technologies. Mechanical Turk is already widely used in psychology, but its usefulness could be extended further. Here is an example: imagine an extension to Mechanical Turk in which one could choose to have the tasks completed (or attempted) by people in non-ordinary states of consciousness.

Demographic Breakdown

With Mechanical Turk you can already ask for people who belong to specific demographic categories to do your task. For example, some academics are interested in the livelihoods of people in certain age ranges, NLP researchers might need native speakers of a particular language, and people who want to proofread a text may request users who have completed an undergraduate degree. The demographic categories are helpful but also coarse. In practice they tend to be used as noisy proxies for more subtle attributes. If we could multiply the categories, which ones would give the highest bang for the buck? I suspect there is a lot of interesting information to be gained from adding categories like personality, cognitive organization, and emotional temperament. What else?
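Mechanically, new categories could ride on MTurk’s existing qualification system. A sketch of the request structure: the locale and approval-rate entries are real system qualifications, while the state-of-consciousness entry is a hypothetical custom qualification that a platform like this would have to create:

```python
# Sketch: demographic-style filtering via QualificationRequirements.
qualification_requirements = [
    {   # Real system qualification: worker locale.
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {   # Real system qualification: assignment approval rate >= 95%.
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {   # Hypothetical custom qualification (would be created with
        # create_qualification_type()), e.g. "currently microdosing".
        "QualificationTypeId": "EXAMPLE_STATE_OF_CONSCIOUSNESS_QUAL",
        "Comparator": "EqualTo",
        "IntegerValues": [1],
    },
]
# Passed as create_hit(..., QualificationRequirements=qualification_requirements)
```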

States of Consciousness as Points of View

One thing to consider is that the value of a service like Mechanical Turk comes in part from the range of “points of view” that the participants bring. After all, ensemble models that incorporate diverse types of modeling approaches and datasets usually dominate in real-world machine learning competitions (e.g. Kaggle). Analogously, for a number of applications, getting feedback from someone who thinks differently than everyone already consulted is much more valuable than consulting hundreds of people similar to those already queried. Human minds, insofar as they are prediction machines, can be used as diverse models. A wide range of points of view expands the perspectives used to draw inferences, and in many real-world conditions this will be beneficial for the accuracy of an aggregated prediction. So what would a radical approach to multiplying such “points of view” entail? Arguably a very efficient way of doing so would involve people who inhabit extraordinarily different states of consciousness outside the “typical everyday” mode of being.
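A toy simulation (invented numbers, not data) of the statistical intuition behind this: raters whose errors are independent cancel each other’s noise when averaged, while raters who share a common bias do not, no matter how many you add:

```python
# Why diverse "points of view" beat many copies of the same point of view.
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0
n_raters = 100

# Similar raters: one shared bias plus individual noise; averaging can't
# remove the shared bias.
similar = truth + rng.normal(2.0, 1.0) + rng.normal(0.0, 1.0, n_raters)
# Diverse raters: each has an independent bias; biases cancel in the mean.
diverse = truth + rng.normal(0.0, 2.0, n_raters) + rng.normal(0.0, 1.0, n_raters)

print(abs(similar.mean() - truth))  # stays near the shared bias (~2)
print(abs(diverse.mean() - truth))  # shrinks toward 0 as raters are added
```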

Jokingly, I’d very much like to see the “wisdom of the crowds enhanced with psychedelic points of view” expressed in mainstream media. I can imagine an anchorwoman on CNN saying: “according to recent polls 30% of people agree that X, now let’s break this down by state of consciousness… let’s see what the people on acid have to say… ” I would personally be very curious to hear how “the people on acid” are thinking about certain issues relative to e.g. a breakdown of points of view by political affiliation. Leaving jokes aside, why would this be a good idea? Why would anyone actually build this?

I posit that a “Mechanical Turk for People on Psychedelics” would benefit the requesters, the workers, and outsiders. Let’s start with the top three benefits for requesters: better art and marketing, enhanced problem solving, and accelerating the science of consciousness. For workers, the top reason would be making work more interesting, stimulating, and enjoyable. And from the point of view of outsiders, we could anticipate some positive externalities such as improved foundational science, accelerated commercial technology development, and better prediction markets. Let’s dive in:

Benefits to Requesters

Art and Marketing

A reason why a service like this might succeed commercially comes from the importance of understanding one’s audience in art and marketing. For example, if one is developing a product targeted at people who have a hangover (e.g. “hangover remedies”), one’s best bet would be to see how people who actually are hungover resonate with the message. Polling people who are drunk, high on weed, in empathogenic states, on psychedelics, on specific psychiatric medications, etc. could certainly find uses in marketing research for sports, comedy, music shows, and so on.

Basically, when the product is consumed at the sort of events where people frequently avoid being sober for the occasion, doing market research on the same people while sober might produce misleading results. What percent of concert-goers are sober the entire night? Or of people watching the World Cup final? Clearly, a Mechanical Turk service with diverse states of consciousness has the potential to improve marketing epistemology.

On the art side, people who might want to be the next Alex Grey or Android Jones would benefit from prototyping new visual styles on crowds of people who are on psychedelics (i.e. the main consumers of such artistic styles).

As an aside, I would like to point out that in my opinion, artists who create audio or images that are expected to be consumed by people in altered states of consciousness have some degree of responsibility in ensuring that they are not particularly upsetting to people in such states. Indeed, some relatively innocent sounds and images might cause a lot of anxiety or trigger negative states in people on psychedelics due to the way they are processed in such states. With a Mechanical Turk for psychedelics, artists could reduce the risk of upsetting festival/concert goers who partake in psychedelic perception by screening out offending stimuli.

Problem Solving

On a more exciting note, there are a number of indications that states of consciousness as alien as those induced by major psychedelics are at times computationally suited to solve information processing tasks in competitive ways. Here are two concrete examples: First, in the sixties there was some amount of research performed on psychedelics for problem solving. A notable example would be the 1966 study conducted by Willis Harman & James Fadiman in which mescaline was used to aid scientists, engineers, and designers in solving concrete technical problems, with very positive outcomes. And second, in How to Secretly Communicate with People on LSD we delved into ways that messages could be encoded in audio-visual stimuli in such a way that only people high on psychedelics could decode them. We called this type of information concealment Psychedelic Cryptography.

These examples are just proofs of concept that there probably are a multitude of tasks for which minds under various degrees of psychedelic alteration outperform those same minds in sober states. In turn, it may end up being profitable to recruit people in such states to complete your tasks when they are genuinely better at them than the sober competition. How would one know when to use which state of consciousness? The system could include an algorithm that samples people from various states of consciousness to identify the most promising states for your particular problem and then assigns the bulk of the task to them.
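Such a routing algorithm could be as simple as a multi-armed bandit over states of consciousness. A hedged sketch, with placeholder state names and an epsilon-greedy policy standing in for whatever a real platform would use:

```python
# Sketch: route task units to the state of consciousness performing best,
# while occasionally exploring in case the current ranking is wrong.
import random
from collections import defaultdict

STATES = ["sober", "microdose_lsd", "thc", "modafinil"]  # placeholder arms

scores = defaultdict(list)  # state -> observed quality ratings

def choose_state(epsilon=0.1):
    """Pick a state of consciousness for the next task unit."""
    unexplored = [s for s in STATES if not scores[s]]
    if unexplored:                 # pilot phase: sample every state at least once
        return random.choice(unexplored)
    if random.random() < epsilon:  # keep exploring occasionally
        return random.choice(STATES)
    # Exploit: the state with the best mean quality so far.
    return max(STATES, key=lambda s: sum(scores[s]) / len(scores[s]))

def record_result(state, quality):
    """quality: the requester's rating of the completed task, e.g. in [0, 1]."""
    scores[state].append(quality)
```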

All of this said, the application I find the most exciting is…

Accelerating the Science of Consciousness

The psychedelic renaissance is finally getting into the territory of performance enhancement in altered states. For example, there is an ongoing study that evaluates how microdosing impacts how one plays Go, and another one that uses a self-blinding protocol to assess how microdosing influences cognitive abilities and general wellbeing.

A whole lot of information about psychedelic states can be gained by doing browser experiments with people high on them. From sensory-focused studies such as visual psychophysics and auditory hedonics to experiments involving higher-order cognition and creativity, internet-based studies of people on altered states can shed a lot of light on how the mind works. I, for one, would love to estimate the base-rate of various wallpaper symmetry groups in psychedelic visuals (cf. Algorithmic Reduction of Psychedelic States), and to study the way psychedelic states influence the pleasantness of sound. There may be no need to spend hundreds of thousands of dollars on experiments that study those questions when the cost of asking people who are on psychedelics to do tasks can be amortized by having them participate in hundreds of studies during e.g. a single LSD session.
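Estimating those base rates from browser experiments is then just counting with error bars. A sketch with made-up counts (the group names are the standard crystallographic ones):

```python
# Sketch: base rates of wallpaper symmetry groups in reported visuals,
# with normal-approximation 95% intervals. Counts are placeholders.
from math import sqrt

reports = {"p6m": 41, "p4m": 23, "cmm": 11, "p3": 9, "other": 16}
n = sum(reports.values())

for group, k in sorted(reports.items(), key=lambda kv: -kv[1]):
    p = k / n
    se = sqrt(p * (1 - p) / n)
    print(f"{group}: {p:.2f} ± {1.96 * se:.2f}")
```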

[Image: the 17 wallpaper symmetry groups (from Quantifying Bliss)]

This kind of research platform would also shed light on how experiences of mental illness compare with altered states of consciousness, and allow us to place the effects of common psychiatric medications on a common “map of mental states”. Let me explain. While recreational materials tend to produce the largest changes to people’s conscious experience, it should go without saying that a whole lot of psychiatric medications have unusual effects on one’s state of consciousness. For example: most people have a hard time pin-pointing the effect of beta blockers on their experience, but it is undeniable that such compounds influence brain activity, and there are suggestions that they may have long-term mood effects. Many people do report specific changes to their experience related to beta blockers, and experienced psychonauts can often compare their effects to other drugs that they may use as benchmarks. By conducting psychophysical experiments on people who are taking various major psychoactives, one would get an objective benchmark for how the mind is altered along a wide range of dimensions by each of these substances. In turn, this generalized Mechanical Turk would enable us to pin-point where much more subtle drugs fall within this space (cf. State-Space of Drug Effects).
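One way to picture such a map: represent each substance by its vector of psychophysical benchmark scores gathered through the platform, then project to two dimensions to see where subtle compounds land relative to strong reference points. A sketch with invented scores:

```python
# Sketch: a common "map of mental states" from psychophysical benchmarks.
import numpy as np
from sklearn.decomposition import PCA

# Rows: substances; columns: z-scored benchmark tasks (e.g. tracer duration,
# sound-pleasantness shift, reaction-time change). All values invented.
substances = ["placebo", "lsd", "mdma", "alcohol", "propranolol"]
scores = np.array([
    [ 0.0, 0.0,  0.0],
    [ 2.5, 1.2, -0.8],
    [ 0.6, 2.0, -0.3],
    [-0.2, 0.4, -1.5],
    [-0.1, 0.1, -0.2],   # subtle: near placebo, but measurably off it
])

coords = PCA(n_components=2).fit_transform(scores)
for name, (x, y) in zip(substances, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```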

In other words, this platform may be revolutionary when it comes to data collection and benchmarking for psychiatric drugs in general. That said, since these compounds are more often than not used daily for several months rather than briefly or as needed, it would be hard to see how the same individual performs a certain task while on and off the medicine. This could be addressed by implementing a system allowing requesters to ask users for follow-up experiments if/when the user changes his or her drug regimen.

Benefit to Users

As claimed earlier on, we believe that this type of platform would make work more enjoyable, stimulating, and interesting for many users. Indeed, there does seem to be a general trend of people wanting to contribute to science and culture by sharing their experiences in non-ordinary states of consciousness. For instance, the wonderful artists at r/replications try to make accurate depictions of various unusual states of consciousness for free. There is even an initiative to document the subjective effects of various compounds by grounding trip reports in a subjective effects index. The point being that if people are willing to share their experience and time in psychedelic states of consciousness for free, chances are that they will not complain if they can also earn money with this unusual hobby.


LSD replication (source: r/replications)

We also know from many artists and scientists that normal everyday states of consciousness are not always the best for particular tasks. By expanding the range of states of consciousness that carry economic advantages, we would be allowing people to perform at their best. You may not be allowed to conduct your job while high at your workplace, even if you perform it better that way. But with this kind of platform, you would have the freedom to choose the state of consciousness that optimizes your performance, and be paid accordingly.

Possible Downsides

It is worth mentioning that there would be challenges and negative aspects too. In general, we can probably all agree that it would suck to have to endure advertisement targeted to your particular state of consciousness. If there is a way to prevent this from happening, I would love to hear it. Unfortunately, I assume that marketing will sooner or later catch on to this modus operandi, and that a Mechanical Turk for people on altered states would be used for advertisement before anything else. Making better targeted ads, it turns out, is a commercially viable way of bootstrapping all sorts of novel systems. But in the broader scope, better advertisement puts us at higher risk of being taken over by pure replicators, so it is worth being cautious with this application.

In the worst case scenario, we discover that very negative states of consciousness dominate other states in the arena of computational efficiency. In this scenario, the abilities useful to survive in the mental economy of the future happen to be those that employ suffering in one way or another. In that case, the evolutionary incentive gradients would lead to terrible places. For example, future minds might end up employing massive amounts of suffering to “run our servers”, so to speak. Worse, these minds would have no choice: if they didn’t, they would be taken over by other minds that do, i.e. a race to the bottom. Scenarios like this have been considered before (1, 2, 3), and we should not ignore their warning signs.

Of course, this can only happen if there are indeed computational benefits to using consciousness for information processing tasks to begin with. At Qualia Computing we generally assume that the unity of consciousness confers unique computational benefits. Hence, I would expect any outright computational use of states of consciousness to involve a lot of phenomenal binding, and, at the evolutionary limit, conscious super-computers would probably be super-sentient. That said, the optimal hedonic tone of the minds with the highest computational efficiency is less certain. This complex matter will be dealt with elsewhere.

Concluding Discussion

Reverse Engineering Systems

A lot of people would probably agree that a video of Elon Musk high on THC may have substantially higher value than many videos of him sober. Much of this value comes from the information gained about him by having a completely new point of view on (or projection of) his mind. Reverse-engineering a system involves doing things to it that change the way it operates in order to try to reconstruct how it is put together. The same is true for the mind, and for the computational benefits of consciousness more broadly.

The Cost of a State of Consciousness

Another important consideration would be cost assignment for different states of consciousness. I imagine that the going rates for participants on various states would highly depend on the kind of application and profitability of these states. The price would reach a stable point that balances the usability of a state of consciousness for various tasks (demand) and its overall supply.
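A toy model of that equilibrium claim, with purely illustrative linear supply and demand curves:

```python
# Sketch: the going hourly rate for a state of consciousness settles where
# requesters' demand meets workers' supply. Curves are invented for illustration.
def demand(price):   # task-hours requesters will fund at this rate
    return max(0.0, 100 - 2.0 * price)

def supply(price):   # worker-hours offered at this rate
    return 5.0 * price

price = 1.0
for _ in range(200):                 # adjust toward the crossing point
    price += 0.01 * (demand(price) - supply(price))
print(round(price, 2))               # ~14.29, where 100 - 2p = 5p
```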

For problem solving in some specialized applications, I could imagine “mathematician on DMT” being a high-end sort of state of consciousness priced very highly. Foundational consciousness research and phenomenological studies, for instance, might find such participants extremely valuable, as they might be helpful in analyzing novel mathematical ideas and in using their mathematical expertise to describe the structure of such experiences (cf. Hyperbolic Geometry of DMT Experiences).

Unfortunately, if the demand for high-end rational psychonauts never truly picks up, one might expect that people who could become professional rational psychonauts will instead work for Google or Facebook or some other high-paying company. Moreover, due to Lemon Market dynamics, people who do insist on hiring rational psychonauts will most likely be disappointed. Sasha Shulgin and his successors will probably only participate in such markets if the rewards are high enough to justify using their precious time on novel alien states of consciousness to do your experiment rather than theirs.

In the ideal case this type of platform might function as a spring-board to generate a critical mass of active rational psychonauts who could do each other’s experiments and replicate the results of underground researchers.

Quality Metrics

Accurately matching the task with the state of consciousness would be critical. For example, you might not necessarily want someone who is high on a large dose of acid to take a look at your tax returns*. Perhaps for mundane tasks one would want people who are in states of optimal arousal (e.g. on modafinil). As mentioned earlier, a system that identifies the most promising states of consciousness for your task would be a key feature of the platform.

If we draw inspiration from the original service, we could try to make a system analogous to “Mechanical Turk Masters”. Here the service charges a higher price for requesting people who have been vetted as workers who produce high quality output. To be a Master one needs to have a high task-approval rating and to have completed an absurd number of tasks. Perhaps top scoreboards and public requester prizes for best work would go a long way in keeping the quality of psychedelic workers at a high level.
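A sketch of what such a vetting rule might look like in code; the thresholds are invented for illustration:

```python
# Sketch: a "Masters"-style vetting rule for psychedelic workers.
from dataclasses import dataclass

@dataclass
class Worker:
    approval_rate: float   # fraction of submitted tasks approved
    tasks_completed: int

def is_master(w: Worker, min_rate=0.97, min_tasks=1000):
    """High task-approval rating plus an absurd number of completed tasks."""
    return w.approval_rate >= min_rate and w.tasks_completed >= min_tasks

print(is_master(Worker(0.99, 4200)))   # True
print(is_master(Worker(0.99, 12)))     # False: not enough history yet
```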

In practice, given the population base of people who would use this service, I would predict that the most successful tasks in terms of engagement from the user-base will to a large extent be those that have nerd-sniping qualities.** That is, make tasks that are especially fun to complete on psychedelics (and other altered states) and you will most likely get a lot of high quality work. In turn, this platform would generate the best outcomes when the tasks submitted are both fun and useful (hence benefiting workers and requesters alike).

Keeping Consciousness Useful

Finally, we think that this kind of platform would have a lot of long-term positive externalities. In particular, making a wider range of states of consciousness economically useful goes in the general direction of keeping consciousness relevant in the future. In the absence of selection pressures that make consciousness economically useful (and hence useful to stay alive and reproduce), we can anticipate a possible drift from consciousness being somewhat in control (for now) to a point where only pure replicators matter.


Bonus content

If you are concerned with social power in a post-apocalyptic landscape, it is important that you figure out a way to induce psychedelic experiences in such a way that they cannot easily be used as weapons. E.g. it would be key to only have physiologically safe (e.g. not MDMA) and low-potency (e.g. not LSD) materials in a Mad Max scenario. For the love of God, please avoid stockpiling compounds that are both potent and physiologically dangerous (e.g. NBOMes) in your nuclear bunker! Perhaps high-potency materials could still work out if they are blended in hard-to-separate ways with fillers, but why risk it? I assume that becoming a cult leader would not be very hard if one were the only person who can procure reliable mystical experiences for people living in most post-apocalyptic scenarios. For best results make sure that the cause of the post-apocalyptic state of the world is a mystery to its inhabitants, such as in the documentary Gurren Lagann, and the historical monographs written by Philip K. Dick.


*With notable exceptions. For example, some regular cannabis users do seem to concentrate better while on manageable amounts of THC, and if the best tax attorney in your vicinity willing to do your taxes is in this predicament, I’d suggest you don’t worry too much about her highness.

**If I were a philosopher of science I would try to contribute a theory for scientific development based on nerd-sniping. Basically, how science develops is by the dynamic way in which scientists at all points are following the nerd-sniping gradient. Scientists are typically people who have their curiosity lever all the way to the top. It’s not so much that they choose their topics strategically or at random. It’s not so much a decision as it is a compulsion. Hence, the sociological implementation of science involves a collective gradient ascent towards whatever is nerd-sniping given the current knowledge. In turn, the generated knowledge from the intense focus on some area modifies what is known and changes the nerd-sniping landscape, and science moves on to other topics.

The Qualia Explosion

Extract from “Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?” (talk) by David Pearce

Supersentience: Turing plus Shulgin?

Compared to the natural sciences (cf. the Standard Model in physics) or computing (cf. the Universal Turing Machine), the “science” of consciousness is pre-Galilean, perhaps even pre-Socratic. State-enforced censorship of the range of subjective properties of matter and energy in the guise of a prohibition on psychoactive experimentation is a powerful barrier to knowledge. The legal taboo on the empirical method in consciousness studies prevents experimental investigation of even the crude dimensions of the Hard Problem, let alone locating a solution-space where answers to our ignorance might conceivably be found.

Singularity theorists are undaunted by our ignorance of this fundamental feature of the natural world. Instead, the Singularitarians offer a narrative of runaway machine intelligence in which consciousness plays a supporting role ranging from the minimal and incidental to the completely non-existent. However, highlighting the Singularity movement’s background assumptions about the nature of mind and intelligence, not least the insignificance of the binding problem to AGI, reveals why FUSION and REPLACEMENT scenarios are unlikely – though a measure of “cyborgification” of sentient biological robots augmented with ultrasmart software seems plausible and perhaps inevitable.

If full-spectrum superintelligence does indeed entail navigation and mastery of the manifold state-spaces of consciousness, and ultimately a seamless integration of this knowledge with the structural understanding of the world yielded by the formal sciences, then where does this elusive synthesis leave the prospects of posthuman superintelligence? Will the global proscription of radically altered states last indefinitely?

Social prophecy is always a minefield. However, there is one solution to the indisputable psychological health risks posed to human minds by empirical research into the outlandish state-spaces of consciousness unlocked by ingesting the tryptamines, phenylethylamines, isoquinolines and other pharmacological tools of sentience investigation. This solution is to make “bad trips” physiologically impossible – whether for individual investigators or, in theory, for human society as a whole. Critics of mood-enrichment technologies sometimes contend that a world animated by information-sensitive gradients of bliss would be an intellectually stagnant society: crudely, a Brave New World. On the contrary, biotech-driven mastery of our reward circuitry promises a knowledge explosion in virtue of allowing a social, scientific and legal revolution: safe, full-spectrum biological superintelligence. For genetic recalibration of hedonic set-points – as distinct from creating uniform bliss – potentially leaves cognitive function and critical insight both sharp and intact; and offers a launchpad for consciousness research in mind-spaces alien to the drug-naive imagination. A future biology of invincible well-being would not merely immeasurably improve our subjective quality of life: empirically, pleasure is the engine of value-creation. In addition to enriching all our lives, radical mood-enrichment would permit safe, systematic and responsible scientific exploration of previously inaccessible state-spaces of consciousness. If we were blessed with a biology of invincible well-being, exotic state-spaces would all be saturated with a rich hedonic tone.

Until this hypothetical world-defining transition, pursuit of the rigorous first-person methodology and rational drug-design strategy pioneered by Alexander Shulgin in PiHKAL and TiHKAL remains confined to the scientific counterculture. Investigation is risky, mostly unlawful, and unsystematic. In mainstream society, academia and peer-reviewed scholarly journals alike, ordinary waking consciousness is assumed to define the gold standard in which knowledge-claims are expressed and appraised. Yet to borrow a homely-sounding quote from Einstein, “What does the fish know of the sea in which it swims?” Just as a dreamer can gain only limited insight into the nature of dreaming consciousness from within a dream, likewise the nature of “ordinary waking consciousness” can only be glimpsed from within its confines. In order to scientifically understand the realm of the subjective, we’ll need to gain access to all its manifestations, not just the impoverished subset of states of consciousness that tended to promote the inclusive fitness of human genes on the African savannah.

Why the Proportionality Thesis Implies an Organic Singularity

So if the preconditions for full-spectrum superintelligence, i.e. access to superhuman state-spaces of sentience, remain unlawful, where does this roadblock leave the prospects of runaway self-improvement to superintelligence? Could recursive genetic self-editing of our source code close the gap? Or will traditional human personal genomes be policed by a dystopian Gene Enforcement Agency in a manner analogous to the coercive policing of traditional human minds by the Drug Enforcement Agency?

Even in an ideal regulatory regime, the process of genetic and/or pharmacological self-enhancement is intuitively too slow for a biological Intelligence Explosion to be a live option, especially when set against the exponential increase in digital computer processing power and inorganic AI touted by Singularitarians. Prophets of imminent human demise in the face of machine intelligence argue that there can’t be a Moore’s law for organic robots. Even the Flynn Effect, the three-points-per-decade increase in IQ scores recorded during the 20th century, is comparatively puny; and in any case, this narrowly-defined intelligence gain may now have halted in well-nourished Western populations.

However, writing off all scenarios of recursive human self-enhancement would be premature. Presumably, the smarter our nonbiological AI, the more readily AI-assisted humans will be able recursively to improve our own minds with user-friendly wetware-editing tools – not just editing our raw genetic source code, but also the multiple layers of transcription and feedback mechanisms woven into biological minds. Presumably, our ever-smarter minds will be able to devise progressively more sophisticated, and also progressively more user-friendly, wetware-editing tools. These wetware-editing tools can accelerate our own recursive self-improvement – and manage potential threats from nonfriendly AGI that might harm rather than help us, assuming that our earlier strictures against the possibility of digital software-based unitary minds were mistaken. MIRI rightly call attention to how small enhancements can yield immense cognitive dividends: the relatively short genetic distance between humans and chimpanzees suggests how relatively small enhancements can exert momentous effects on a mind’s general intelligence, thereby implying that AGIs might likewise become disproportionately powerful through a small number of tweaks and improvements. In the post-genomic era, presumably exactly the same holds true for AI-assisted humans and transhumans editing their own minds. What David Chalmers calls the proportionality thesis, i.e. increases in intelligence lead to proportionate increases in the capacity to design intelligent systems, will be vindicated as recursively self-improving organic robots modify their own source code and bootstrap our way to full-spectrum superintelligence: in essence, an organic Singularity. And in contrast to classical digital zombies, superficially small molecular differences in biological minds can result in profoundly different state-spaces of sentience. Compare the ostensibly trivial difference in gene expression profiles of neurons mediating phenomenal sight and phenomenal sound – and the radically different visual and auditory worlds they yield.

Compared to FUSION or REPLACEMENT scenarios, the AI-human CO-EVOLUTION conjecture is apt to sound tame. The likelihood that our posthuman successors will also be our biological descendants suggests at most a radical conservatism. In reality, a post-Singularity future where today’s classical digital zombies were superseded merely by faster, more versatile classical digital zombies would be infinitely duller than a future of full-spectrum supersentience. For all insentient information processors are exactly the same inasmuch as the living dead are not subjects of experience. They’ll never even know what it’s like to be “all dark inside” – or the computational power of phenomenal object-binding that yields illumination. By contrast, posthuman superintelligence will not just be quantitatively greater but also qualitatively alien to archaic Darwinian minds. Cybernetically enhanced and genetically rewritten biological minds can abolish suffering throughout the living world and banish experience below “hedonic zero” in our forward light-cone, an ethical watershed without precedent. Post-Darwinian life can enjoy gradients of lifelong blissful supersentience with the intensity of a supernova compared to a glow-worm. A zombie, on the other hand, is just a zombie – even if it squawks like Einstein. Posthuman organic minds will dwell in state-spaces of experience for which archaic humans and classical digital computers alike have no language, no concepts, and no words to describe our ignorance. Most radically, hyperintelligent organic minds will explore state-spaces of consciousness that do not currently play any information-signalling role in living organisms, and are impenetrable to investigation by digital zombies. In short, biological intelligence is on the brink of a recursively self-amplifying Qualia Explosion – a phenomenon of which digital zombies are invincibly ignorant, and invincibly ignorant of their own ignorance. Humans too of course are mostly ignorant of what we’re lacking: the nature, scope and intensity of such posthuman superqualia are beyond the bounds of archaic human experience. Even so, enrichment of our reward pathways can ensure that full-spectrum biological superintelligence will be sublime.


Image Credit: MohammadReza DomiriGanji

Everything in a Nutshell

David Pearce at Quora in response to the question: “What are your philosophical positions in one paragraph?”:

“Everyone takes the limits of his own vision for the limits of the world.”
(Schopenhauer)

All that matters is the pleasure-pain axis. Pain and pleasure disclose the world’s inbuilt metric of (dis)value. Our overriding ethical obligation is to minimise suffering. After we have reprogrammed the biosphere to wipe out experience below “hedonic zero”, we should build a “triple S” civilisation based on gradients of superhuman bliss. The nature of ultimate reality baffles me. But intelligent moral agents will need to understand the multiverse if we are to grasp the nature and scope of our wider cosmological responsibilities. My working assumption is non-materialist physicalism. Formally, the world is completely described by the equation(s) of physics, presumably a relativistic analogue of the universal Schrödinger equation. Tentatively, I’m a wavefunction monist who believes we are patterns of qualia in a high-dimensional complex Hilbert space. Experience discloses the intrinsic nature of the physical: the “fire” in the equations. The solutions to the equations of QFT or its generalisation yield the values of qualia. What makes biological minds distinctive, in my view, isn’t subjective experience per se, but rather non-psychotic binding. Phenomenal binding is what consciousness is evolutionarily “for”. Without the superposition principle of QM, our minds wouldn’t be able to simulate fitness-relevant patterns in the local environment. When awake, we are quantum minds running subjectively classical world-simulations. I am an inferential realist about perception. Metaphysically, I explore a zero ontology: the total information content of reality must be zero on pain of a miraculous creation of information ex nihilo. Epistemologically, I incline to a radical scepticism that would be sterile to articulate. Alas, the history of philosophy twinned with the principle of mediocrity suggests I burble as much nonsense as everyone else.


Image credit: Joseph Matthias Young

Why I think the Foundational Research Institute should rethink its approach

by Mike Johnson

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

I. What is the Foundational Research Institute?

The Foundational Research Institute (FRI) is a Berlin-based group that “conducts research on how to best reduce the suffering of sentient beings in the near and far future.” Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit- they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.
  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.
  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is that there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like “awareness”, “happy”, etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by “life”. Just as there can be room for fuzziness about where exactly to draw the boundaries around “life”, different analytic functionalists may have different opinions about where to define the boundaries of “consciousness” and other mental states. This is why consciousness is “up to us to define”. There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I’m conscious. I also know, from neuroscience combined with Occam’s razor, that my consciousness consists only of material operations in my brain — probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations — as Eliezer Yudkowsky puts it, “How An Algorithm Feels From Inside“. Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.


In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

II. Why do I worry about FRI’s research framework?

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it’s a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

The FRI model seems to imply that suffering is ineffable enough such that we can’t have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., if I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and isn’t gnireffus is rather subjective with no privileged definition, and it’s impossible to objectively tell if a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

And if functionalism is having trouble adjudicating the easy cases of suffering–whether monkeys can suffer, or whether dogs can— it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; if any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of some disagreement-adjudicating theory of suffering will matter quite a lot.

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It’s similar to the question of what makes one definition of justice or virtue better than another.”

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence- in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it’s not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

Brian argues that we treat consciousness/phenomenology as more ‘real’ than it is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness– enumerating its coherent constituent pieces– has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became– and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger as they’ve looked into techniques for how to “de-reify” consciousness while preserving some flavor of moral value, not smaller. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as trying to treat a reification as ‘more real’ than it actually is will.

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’- then by all means let’s build better ones, rather than giving up on precision.

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There’s no hard problem of consciousness for the same reason there’s no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn’t mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system under which you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists- you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in fact it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
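To make this concrete, here’s a minimal sketch (my own toy construction with hypothetical interpretation maps- not anything Brian endorses) of how a single physical state-trace supports incompatible computational readings:

```python
# Toy version of the popcorn argument: one 'physical' state-trace,
# two equally arbitrary interpretation maps, two different computations.

trace = [0.9, 0.2, 0.7, 0.4]   # stand-in for the popcorn's physical states

# Interpretation 1: read anything above 0.5 as a logical 1
bits_1 = [int(x > 0.5) for x in trace]   # -> [1, 0, 1, 0]

# Interpretation 2: read anything above 0.3 as a logical 1
bits_2 = [int(x > 0.3) for x in trace]   # -> [1, 0, 1, 1]

# Under the first map the trace "computes" strict alternation; under the
# second it "computes" something else. Nothing in the physics privileges
# either threshold, and far more gerrymandered maps are possible too.
print(bits_1, bits_2)
```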

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle’s terminology). There’s no objective meaning to ‘the computation that this physical system is implementing’ (unless you’re referring to the specific equations of physics that the system is playing out).”

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect— that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible— in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.
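McCabe’s point can be illustrated in miniature (my toy example, not taken from his paper): the same physical bit-pattern represents different numbers under different, equally legitimate numeric interpretations.

```python
# Four memory cells in definite logical states:
pattern = [1, 0, 1, 1]

bitstring = "".join(map(str, pattern))
as_big_endian    = int(bitstring, 2)           # read left-to-right: 11
as_little_endian = int(bitstring[::-1], 2)     # read right-to-left: 13
# read as a 4-bit two's-complement signed integer: -5
as_signed = as_big_endian - 16 if pattern[0] else as_big_endian

print(as_big_endian, as_little_endian, as_signed)   # 11 13 -5
```

Which of these numbers the memory ‘really’ contains is fixed by convention, not by physics- which is exactly why McCabe denies that simulated quantities are objective physical properties.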

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I’ve proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what’s happening in a brain on a lazy afternoon. How can we capture that difference?”

But if Brian grants the former point- that “There’s no objective meaning to ‘the computation that this physical system is implementing’”– then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say “computations” in this piece, one could just as well substitute “physical processes” instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.

 

Objection 7: FRI doesn’t fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

 

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

 

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

 

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

 

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

 

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1−ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

 

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
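For readers who want to check the quoted numbers, here is a small numerical sketch of the idealized dynamics Aaronson describes (assuming perfect ε-rotations, and full decoherence of the query branch whenever a bomb is present):

```python
import math

eps = 0.01
steps = round(1 / eps)      # Aaronson's "1/ε repetitions"

# No bomb: the ε-rotations add coherently, so after k steps the
# amplitude on the "query" state is sin(k·ε).
p_query = math.sin(steps * eps) ** 2
print(f"no bomb: P(query state) ≈ {p_query:.3f}")        # ≈ 0.708, order 1

# Bomb: the query branch is measured each round, so amplitude never
# accumulates; each round contributes probability ε² of detonation.
p_ever_queried = 1 - (1 - eps**2) ** steps
print(f"bomb:    P(ever queried) ≈ {p_ever_queried:.4f}")  # ≈ ε = 0.01
```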

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, the possibility that if suffering is computable, it may also be uncomputable/reversible (as in Aaronson’s U⁻¹ scenario) would suggest s-risks aren’t as serious as FRI treats them.

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

III. QRI’s alternative

Analytic functionalism is essentially a negative hypothesis about consciousness: it’s the argument that there’s no order to be found, no rigor to be had. It obscures this with talk of “function”- a red herring which it not only doesn’t define, but admits is undefinable. It doesn’t make any positive assertion. Functionalism is skepticism- nothing more, nothing less.

But is it right?

Ultimately, I think these a priori arguments are much like people in the middle ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it’s often difficult to tell the good arguments apart from the ones where we’re just cleverly fooling ourselves. In retrospect, the best way to *prove* that systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That’s how I see what we’re doing at QRI with Qualia Formalism: we’re assuming a formalism is possible, and we’re busy building the object-level theory.

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven’t dug into our stuff before. Consider this a down-payment on a more substantial introduction.

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV argues that much of the apparent complexity of emotional valence is evolutionarily contingent, and that if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a number of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests that understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.
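To gesture at what ‘measuring the symmetry of a mathematical object’ could look like operationally, here is a deliberately crude toy metric (my illustration only- not QRI’s actual formalism): score a sampled signal by how invariant it is under time-shifts.

```python
import numpy as np

def symmetry_score(signal):
    """Mean |correlation| of a signal with its circular shifts:
    high for periodic/harmonic structure, near zero for noise."""
    s = signal - signal.mean()
    return float(np.mean([abs(np.corrcoef(s, np.roll(s, k))[0, 1])
                          for k in range(1, len(s))]))

t = np.linspace(0, 1, 256, endpoint=False)
harmonic = np.sin(2 * np.pi * 8 * t)                # highly shift-symmetric
noise = np.random.default_rng(0).normal(size=256)   # no temporal structure

print(symmetry_score(harmonic))   # ≈ 0.64
print(symmetry_score(noise))      # ≈ 0.05
```

The STV’s actual claim concerns mathematical objects isomorphic to experiences, not raw signals; the sketch only shows that ‘symmetry’ can be cashed out as a measurable quantity rather than left as a metaphor.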

Likewise, my colleague Andrés Gómez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

IV. Closing thoughts

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and that there are other options which avoid these problems (which, to be fair, is not to say they have no problems of their own).

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of what kinds of problem people have interpreted the Hard Problem as, and also more analysis on the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, with no principled way to derive any ‘ground truth’ or even to pick between interpretations (e.g. my popcorn example). If this isn’t the case— why not?

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”– i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many– but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

Re: Objection 7 (FRI doesn’t fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).

 


In conclusion- I think FRI has a critically important goal- reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.

 

Mike Johnson

Qualia Research Institute


Acknowledgements: thanks to Andrés Gómez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

Sources:

My sources for FRI’s views on consciousness:
Flavors of Computation are Flavors of Consciousness:
https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/
Is There a Hard Problem of Consciousness?
http://reducing-suffering.org/hard-problem-consciousness/
Consciousness Is a Process, Not a Moment
http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/
How to Interpret a Physical System as a Mind
http://reducing-suffering.org/interpret-physical-system-mind/
Dissolving Confusion about Consciousness
http://reducing-suffering.org/dissolving-confusion-about-consciousness/
Debate between Brian & Mike on consciousness:
https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D
Max Daniel’s EA Global Boston 2017 talk on s-risks:
https://www.youtube.com/watch?v=jiZxEJcFExc
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
The Internet Encyclopedia of Philosophy on functionalism:
http://www.iep.utm.edu/functism/
Gordon McCabe on why computation doesn’t map to physics:
http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
Toby Ord on hypercomputation, and how it differs from Turing’s work:
https://arxiv.org/abs/math/0209332
Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:
http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
Scott Aaronson’s thought experiments on computationalism:
http://www.scottaaronson.com/blog/?p=1951
Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:
https://www.nature.com/articles/ncomms10340
My work on formalizing phenomenology:
My meta-framework for consciousness, including the Symmetry Theory of Valence:
http://opentheory.net/PrincipiaQualia.pdf
My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:
http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/
My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:
http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/
My colleague Andrés’s work on formalizing phenomenology:
A model of DMT-trip-as-hyperbolic-experience:
https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/
June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:
https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
A parametrization of various psychedelic states as operators in qualia space:
https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/
A brief post on valence and the fundamental attribution error:
https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/
A summary of some of Selen Atasoy’s current work on Connectome Harmonics:
https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/

The Forces At Work

        Recreational agents which are legal and socially sanctioned by respectable society aren’t, of course, popularly viewed as drugs at all. The nicotine addict and the alcoholic don’t think of themselves as practising psychopharmacologists; and so alas their incompetence is frequently lethal.

        Is such incompetence curable? If it is, and if the abolitionist project can be carried forward with pharmacotherapy in advance of true genetic medicine, then a number of preconditions must first be in place. A necessary and sufficient set could not possibly be listed here. It is still worth isolating and examining below several distinct yet convergent societal trends of huge potential significance.

  1. First, it must be assumed that we will continue to seek out and use chemical mood-enhancers on a massive, species-wide scale.
  2. Second, a pioneering and pharmacologically (semi-)literate elite will progressively learn to use their agents of choice in a much more effective, safe and rational manner. The whole pharmacopoeia of licensed and unlicensed medicines will be purchasable globally over the Net. As the operation of our 30,000 plus genes is unravelled, the new discipline of pharmacogenomics will allow drugs to be personally tailored to the genetic makeup of each individual. Better still, desirable states of consciousness that can be induced pharmacologically can later be pre-coded genetically.
  3. Third, society will continue to fund and support research into genetic engineering, reproductive medicine and all forms of biotechnology. This will enable the breathtaking array of designer-heavens on offer from third-millennium biomedicine to become a lifestyle choice.
  4. Fourth, the ill-fated governmental War On (some) Drugs will finally collapse under the weight of its own contradictions. Parents are surely right to be anxious about many of today’s illegal intoxicants. Yet their toxicity will no more prove a reason to give up the dream of Better Living Through Chemistry than the casualties of early modern medicine are a reason to abandon contemporary medical science for homeopathy.
  5. Fifth, the medicalisation of everyday life, and of the human predicament itself, will continue apace. All manner of currently ill-defined discontents will be medically diagnosed and classified. Our innumerable woes will be given respectable clinical labels. Mass-medicalisation will enable the big drug companies aggressively to extend their lucrative markets in medically-approved psychotropics to a widening clientele. New and improved mood-modulating alleles, and other innovative gene-therapies for mood- and intellect-enrichment, will be patented. They will be brought to market by biotechnology companies eager to cure the psychopathologies of the afflicted; and to maximise profits.
  6. Sixth, in the next few centuries an explosive proliferation of ever-more sophisticated virtual reality software products will enable millions, and then billions, of people to live out their ideal fantasies. Paradoxically, as will be seen, the triumph of sensation-driven wish-fulfilment in immersive VR will also demonstrate the intellectual bankruptcy of our old Peripheralist nostrums of social reform. Unhappiness will persist. The hedonic treadmill can’t succumb to computer software.
  7. Seventh, secularism and individualism will triumph over resurgent Islamic and Christian fundamentalism. An entitlement to lifelong well-being in this world, rather than the next, will take on the status of a basic human right.

         There are quite a few imponderables here. Futurology is not, and predictably will never become, one of the exact sciences. Conceivably, one can postulate, for instance, the global triumph of an anti-scientific theocracy. This might be in the mould of the American religious right; or even some kind of Islamic fundamentalism. Less conceivably, there might be a global victory of tender-minded humanism over the onward march of biotechnical determinism. It is also possible that non-medically-approved drug use could be curtailed, at least for a time, with intrusive personal surveillance technologies and punishments of increasingly draconian severity. Abetted by the latest convulsion of moral panic over Drugs, for example, a repressive totalitarian super-state could institute a regime of universal compulsory blood-tests for banned substances. Enforced “detoxification” in rehabilitation camps for offenders would follow.

        These scenarios and their variants are almost certainly too alarmist. Given a pervasive ethos of individualism, and the worldwide spread of hedonistic consumer-capitalism, then as soon as people discover that there is no biophysical reason on earth why they can’t be as happy as they choose indefinitely, it will be hard to stop more adventurous spirits from exploring that option. Lifelong ecstasy isn’t nearly as bad as it sounds.

David Pearce in The Hedonistic Imperative (chapter 3)

 

Hedonium

Desiring that the universe be turned into Hedonium is the straightforward implication of realizing that everything wants to become music.

The problem is… the world-simulations instantiated by our brains are really good at hiding from us the what-it-is-likeness of peak experiences. As with Buddhist enlightenment, language can only serve as a pointer to the real deal. So how do we use it to point to Hedonium? Here is a list of experiences, concepts and dynamics that (personally) give me at least a sort of intuition pump for what Hedonium might be like. Just remember that it is way beyond any of this:

Positive-sum games, rainbow light, a lover’s everlasting promise of loyalty, hyperbolic harmonics, non-epiphenomenal bliss, life as a game, fractals, children’s laughter, dreamless sleep, the enlightenment of emptiness, loving-kindness directed towards all sentient beings of past, present, and future, temperate wind caressing branches and leaves of trees in a rainforest, perfectly round spheres, visions of a giant yin-yang representing the cosmic balance of energies, Ricci flow, transpersonal experiences, hugging a friend on MDMA, believing in a loving God, paraconsistent logic-transcending Nirvana, the silent conspiracy of essences, eating a meal with every flavor and aroma found in the quantum state-space of qualia, Enya (Caribbean Blue, Orinoco Flow), seeing all the grains of sand in the world at once, funny jokes made of jokes made of jokes made of jokes…, LSD on the beach, becoming lighter-than-air and flying like a balloon, topological non-orientable chocolate-filled cookies, invisible vibrations of love, the source of all existence infinitely reflecting itself in the mirror of self-awareness, super-symmetric experiences, Whitney bottles, Jhana bliss, existential wonder, fully grasping a texture, proving Fermat’s Last Theorem, knowing why there is something rather than nothing, having a benevolent social super-intelligence as a friend, a birthday party with all your dead friends, knowing that your family wants the best for you, a vegan Christmas eve, petting your loving dog, the magic you believed in as a kid, being thanked for saving the life of a stranger, Effective Altruism, crying over the beauty and innocence of pandas, letting your parents know that you love them, learning about plant biology, tracing Fibonacci spirals, comprehending cross-validation (the statistical technique that makes statistics worth learning), reading The Hedonistic Imperative by David Pearce, finding someone who can truly understand you, realizing you can give up your addictions, being set free from prison, Time Crystals, figuring out Open Individualism, G/P-spot orgasm, the qualia of existential purpose and meaning, inventing a graph clustering algorithm, rapture, obtaining a new sense, learning to program in Python, empty space without limit extending in all directions, self-aware nothingness, living in the present moment, non-geometric paradoxical universes, impossible colors, the mantra of Avalokiteshvara, clarity of mind, being satisfied with merely being, experiencing vibrating space groups in one’s visual field, toroidal harmonics, Gabriel’s Oboe by Ennio Morricone, having a traditional dinner prepared by your loving grandmother, thinking about existence at its very core: being as apart from essence and presence, interpreting pop songs by replacing the “you” with an Open Individualist eternal self, finding the perfect middle point between female and male energies in a cosmic orgasm of selfless love, and so on.

The Binding Problem

[Our] subjective conscious experience exhibits a unitary and integrated nature that seems fundamentally at odds with the fragmented architecture identified neurophysiologically, an issue which has come to be known as the binding problem. For the objects of perception appear to us not as an assembly of independent features, as might be suggested by a feature based representation, but as an integrated whole, with every component feature appearing in experience in the proper spatial relation to every other feature. This binding occurs across the visual modalities of color, motion, form, and stereoscopic depth, and a similar integration also occurs across the perceptual modalities of vision, hearing, and touch. The question is what kind of neurophysiological explanation could possibly offer a satisfactory account of the phenomenon of binding in perception?
One solution is to propose explicit binding connections, i.e. neurons connected across visual or sensory modalities, whose state of activation encodes the fact that the areas that they connect are currently bound in subjective experience. However this solution merely compounds the problem, for it represents two distinct entities as bound together by adding a third distinct entity. It is a declarative solution, i.e. the binding between elements is supposedly achieved by attaching a label to them that declares that those elements are now bound, instead of actually binding them in some meaningful way.
Von der Malsburg proposes that perceptual binding between cortical neurons is signalled by way of synchronous spiking, the temporal correlation hypothesis (von der Malsburg & Schneider 1986). This concept has found considerable neurophysiological support (Eckhorn et al. 1988, Engel et al. 1990, 1991a, 1991b, Gray et al. 1989, 1990, 1992, Gray & Singer 1989, Stryker 1989). However although these findings are suggestive of some significant computational function in the brain, the temporal correlation hypothesis as proposed, is little different from the binding label solution, the only difference being that the label is defined by a new channel of communication, i.e. by way of synchrony. In information theoretic terms, this is no different than saying that connected neurons possess two separate channels of communication, one to transmit feature detection, and the other to transmit binding information. The fact that one of these channels uses a synchrony code instead of a rate code sheds no light on the essence of the binding problem. Furthermore, as Shadlen & Movshon (1999) observe, the temporal binding hypothesis is not a theory about how binding is computed, but only how binding is signaled, a solution that leaves the most difficult aspect of the problem unresolved.
I propose that the only meaningful solution to the binding problem must involve a real binding, as implied by the metaphorical name. A glue that is supposed to bind two objects together would be most unsatisfactory if it merely labeled the objects as bound. The significant function of glue is to ensure that a force applied to one of the bound objects will automatically act on the other one also, to ensure that the bound objects move together through the world even when one, or both of them are being acted on by forces. In the context of visual perception, this suggests that the perceptual information represented in cortical maps must be coupled to each other with bi-directional functional connections in such a way that perceptual relations detected in one map due to one visual modality will have an immediate effect on the other maps that encode other visual modalities. The one-directional axonal transmission inherent in the concept of the neuron doctrine appears inconsistent with the immediate bi-directional relation required for perceptual binding. Even the feedback pathways between cortical areas are problematic for this function due to the time delay inherent in the concept of spike train integration across the chemical synapse, which would seem to limit the reciprocal coupling between cortical areas to those within a small number of synaptic connections. The time delays across the chemical synapse would seem to preclude the kind of integration apparent in the binding of perception and consciousness across all sensory modalities, which suggests that the entire cortex is functionally coupled to act as a single integrated unit.
— Section 5 of “Harmonic Resonance Theory: An Alternative to the ‘Neuron Doctrine’ Paradigm of Neurocomputation to Address Gestalt properties of perception” by Steven Lehar

Schrödinger’s Neurons: David Pearce at the “2016 Science of Consciousness” conference in Tucson

Abstract:

 

Mankind’s most successful story of the world, natural science, leaves the existence of consciousness wholly unexplained. The phenomenal binding problem deepens the mystery. Neither classical nor quantum physics seem to allow the binding of distributively processed neuronal micro-experiences into unitary experiential objects apprehended by a unitary phenomenal self. This paper argues that if physicalism and the ontological unity of science are to be saved, then we will need to revise our notions of both 1) the intrinsic nature of the physical and 2) the quasi-classicality of neurons. In conjunction, these two hypotheses yield a novel, bizarre but experimentally testable prediction of quantum superpositions (“Schrödinger’s cat states”) of neuronal feature-processors in the CNS at sub-femtosecond timescales. An experimental protocol using in vitro neuronal networks is described to confirm or empirically falsify this conjecture via molecular matter-wave interferometry.

 

For more see: https://www.physicalism.com/

 

(cf. Qualia Computing in Tucson: The Magic Analogy)

 


(Trivia: David Chalmers is one of the attendees of the talk and asks a question at 24:03.)

Beyond Turing: A Solution to the Problem of Other Minds Using Mindmelding and Phenomenal Puzzles

Here is my attempt at providing an experimental protocol to determine whether an entity is conscious.

If you are just looking for the stuffed animal music video, skip to 23:28.


Are you the only conscious being in existence? How could we actually test whether other beings have conscious minds?

Turing proposed to test the existence of other minds by measuring their verbal indistinguishability from humans (the famous “Turing Test” asks computers to pretend to be humans and checks if humans buy the impersonations). Others have suggested the solution is as easy as connecting your brain to the brain of the being you want to test.

But these approaches fail for a variety of reasons. Turing tests can be beaten by dream characters and mindmelds might merely work by giving you a “hardware upgrade”. There is no guarantee that the entity tested will be conscious on its own. As pointed out by Brian Tomasik and Eliezer Yudkowsky, even if the information content of your experience increases significantly by mindmelding with another entity, this could still be the result of the entity’s brain working as an exocortex: it is completely unconscious on its own yet capable of enhancing your consciousness.

In order to go beyond these limiting factors, I developed the concept of a “phenomenal puzzle”. These are problems that can only be solved by a conscious being in virtue of requiring inner qualia operations for their solution. For example, a phenomenal puzzle is to arrange qualia values of phenomenal color in a linear map where the metric is based on subjective Just Noticeable Differences.
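To show the shape of such a puzzle, here is a toy sketch (my construction; the stand-in distance function replaces the subjective comparisons a real solver would have to perform from the inside):

```python
from itertools import combinations

colors = ["A", "B", "C", "D", "E"]
hidden_pos = {"A": 0, "B": 3, "C": 1, "D": 6, "E": 4}  # unknown 1-D structure

def jnd(x, y):
    """Stand-in for counting subjective just-noticeable differences."""
    return abs(hidden_pos[x] - hidden_pos[y])

# The two most-distant colors must be the endpoints of the line...
ends, _ = max(((p, jnd(*p)) for p in combinations(colors, 2)),
              key=lambda t: t[1])

# ...and sorting by distance from one endpoint recovers the linear map.
print(sorted(colors, key=lambda c: jnd(ends[0], c)))  # ['A','C','B','E','D']
```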

To conduct the experiment you need:

  1. A phenomenal bridge (e.g. a biological neural network that connects your brain to someone else’s brain so that both brains now instantiate a single consciousness).
  2. A qualia calibrator (a device that allows you to cycle through many combinations of qualia values quickly so that you can compare the sensory-qualia mappings in both brains and generate a shared vocabulary for qualia values).
  3. A phenomenal puzzle (as described above).
  4. The right set and setting: the use of a proper protocol.

Here is an example protocol that works for 4) – though there may be other ones that work as well. Assume that you are person A and you are trying to test if B is conscious:

A) Person A learns about the phenomenal puzzle but is not given enough time to solve it.
B) Person A and B mindmeld using the phenomenal bridge, creating a new being AB.
C) AB tells the phenomenal puzzle to itself (by remembering it from A’s narrative).
D) A and B get disconnected and A is sedated (to prevent A from solving the puzzle).
E) B tries to solve the puzzle on its own (the use of computers not connected to the internet is allowed to facilitate self-experimentation).
F) When B claims to have solved it A and B reconnect into AB.
G) AB then tells the solution to itself so that the records of it in B’s narrative get shared with A’s brain memory.
H) Then A and B get disconnected again and if A is able to provide the answer to the phenomenal puzzle, then B must have been conscious!
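The logic of the protocol can be condensed into an information-flow trace (a toy model for illustration, obviously not the experiment itself): mindmelding unions two memories, and the only step at which the solution can enter the system is B’s solo work in step E.

```python
A = {"puzzle"}            # step A: A learns the puzzle (no time to solve it)
B = set()

A = B = A | B             # steps B-C: mindmeld; AB rehearses the puzzle
A, B = set(A), set(B)     # step D: disconnect (A sedated: A computes nothing)

B.add("solution")         # step E: happens only if B can actually solve it

A = B = A | B             # steps F-G: remeld; AB rehearses the solution
A, B = set(A), set(B)     # step H: disconnect again

print("A can answer:", "solution" in A)   # True only via B's solving
```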

To my knowledge, this is the only test of consciousness for which a positive result is impossible (or perhaps just extremely difficult) to explain unless B is conscious.

Of course B could be conscious but not smart enough to solve the phenomenal puzzle. The test simply guarantees that there will be no false positives. Thus it is not a general test for qualia – but it is a start. At least we can now conceive of a way to know (in principle) whether some entities are conscious (even if we can’t tell that any arbitrary entity is). Still, a positive result would completely negate solipsism, which would undoubtedly be a great philosophical victory.