Both physics and philosophy are jargon-ridden. So let’s first define some key concepts.
Both “consciousness” and “physical” are contested terms. Accurately if inelegantly, consciousness may be described, following Nagel (“What is it like to be a bat?”), as the subjective what-it’s-like-ness of experience. Academic philosophers term such self-intimating “raw feels” “qualia” – whether macro-qualia or micro-qualia. The minimum unit of consciousness (or “psychon”, so to speak) has been variously claimed to be the entire universe, a person, a sub-personal neural network, an individual neuron, or the most basic entities recognised by quantum physics. In The Principles of Psychology (1890), American philosopher and psychologist William James christened these phenomenal simples “primordial mind-dust”. This paper conjectures that (1) our minds consist of ultra-rapidly decohering neuronal superpositions in strict accordance with unmodified quantum physics, without the mythical “collapse of the wavefunction”; (2) natural selection has harnessed the properties of these neuronal superpositions so that our minds run phenomenally-bound world-simulations; and (3) with enough ingenuity, the non-classical interference signature of these conscious neuronal superpositions will be independently experimentally detectable (see 6 below), to the satisfaction of the most incredulous critic.
The “physical” may be contrasted with the supernatural or the abstract and – by dualists and epiphenomenalists – with the mental. The current absence of any satisfactory “positive” definition of the physical leads many philosophers of science to adopt instead the “via negativa”. Thus some materialists have sought stipulatively to define the physical in terms of an absence of phenomenal experience. Such a priori definitions of the nature of the physical are question-begging.
“Physicalism” is sometimes treated as the formalistic claim that the natural world is exhaustively described by the equations of physics and their solutions. Beyond these structural-relational properties of matter and energy, the term “physicalism” is also often used to make an ontological claim about the intrinsic character of whatever the equations describe. This intrinsic character, or metaphysical essence, is typically assumed to be non-phenomenal. “Strawsonian physicalists” (cf. “Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?”) dispute any such assumption. Traditional reductive physicalism proposes that the properties of larger entities are determined by properties of their physical parts. If the wavefunction monism of post-Everett quantum mechanics assumed here is true, then the world does not contain discrete physical parts as understood by classical physics.
“Materialism” is the metaphysical doctrine that the world is made of intrinsically non-phenomenal “stuff”. Materialism and physicalism are often treated as cousins and sometimes as mere stylistic variants – with “physicalism” used as a nod to how bosonic fields, for example, are not matter. “Physicalistic materialism” is the claim that physical reality is fundamentally non-experiential and that the natural world is exhaustively described by the equations of physics and their solutions.
“Panpsychism” is the doctrine that the world’s fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn’t claim that the world’s fundamental physical stuff is intrinsically experiential through and through – merely that experiential properties exist alongside its non-experiential ones.
“Epiphenomenalism” in philosophy of mind is the view that experience is caused by material states or events in the brain but does not itself cause anything; the causal efficacy of mental agency is an illusion.
For our purposes, “idealism” is the ontological claim that reality is fundamentally experiential. This use of the term should be distinguished from Berkeleyan idealism and, more generally, from subjective idealism, i.e. the doctrine that only mental contents exist: reality is mind-dependent. One potential source of confusion between contemporary scientific idealism and traditional philosophical idealism is the use, by inferential realists in the theory of perception, of the term “world-simulation”. The mind-dependence of one’s phenomenal world-simulation, i.e. the quasi-classical world of one’s everyday experience, does not entail the idealist claim that the mind-independent physical world is intrinsically experiential in nature – a far bolder conjecture that we nonetheless tentatively defend here.
“Physicalistic idealism” is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions: more specifically, by the continuous, linear, unitary evolution of the universal wavefunction of post-Everett quantum mechanics. The “decoherence program” in contemporary theoretical physics aims to show in a rigorously quantitative manner how quasi-classicality emerges from the unitary dynamics.
“Monism” is the conjecture that reality consists of a single kind of “stuff” – be it material, experiential, spiritual, or whatever. Wavefunction monism is the view that the universal wavefunction mathematically represents, exhaustively, all there is in the world. Strictly speaking, wavefunction monism shouldn’t be construed as the claim that reality literally consists of a certain function, i.e. a mapping from some mind-wrenchingly immense configuration space to the complex numbers, but rather as the claim that every mathematical property of the wavefunction except the overall phase corresponds to some property of the physical world. “Dualism”, the conjecture that reality consists of two kinds of “stuff”, comes in many flavours: naturalistic and theological; interactionist and non-interactionist; property and ontological. In the modern era, most scientifically literate monists have been materialists. But to describe oneself as both a physicalist and a monistic idealist is not the schizophrenic word-salad it sounds at first blush.
“Functionalism” in philosophy of mind is the theory that mental states are constituted solely by their functional role, i.e. by their causal relations to other mental states, perceptual inputs, and behavioural outputs. Functionalism is often associated with the idea of “substrate-neutrality”, sometimes misnamed “substrate-independence”, i.e. minds can be realised in multiple substrates and at multiple levels of abstraction. However, micro-functionalists may dispute substrate-neutrality on the grounds that one or more properties of mind, for example phenomenal binding, functionally implicate the world’s quantum-mechanical bedrock from which the quasi-classical worlds of Everett’s multiverse emerge. Thus this paper will argue that only successive quantum-coherent neuronal superpositions at naively preposterously short time-scales can explain phenomenal binding. Without phenomenal binding, no functionally adaptive classical world-simulations could exist in the first instance.
The “binding problem”(10), also called the “combination problem”, refers to the mystery of how the micro-experiences mediated by supposedly discrete and distributed neuronal edge-detectors, motion-detectors, shape-detectors, colour-detectors (etc.) can be “bound” into unitary experiential objects (“local” binding) apprehended by a unitary experiential self (“global” binding). Neuroelectrode studies using awake, verbally competent human subjects confirm that neuronal micro-experiences exist. Classical neuroscience cannot explain how they could ever be phenomenally bound.
“Mereology” is the theory of the relations of part to whole, and of part to part within a whole. Scientifically literate humans find it natural and convenient to think of particles, macromolecules or neurons as having their own individual wavefunctions by which they can be formally represented. However, the manifest non-classicality of phenomenal binding means that in some contexts we must consider describing the entire mind-brain via a single wavefunction. Organic minds are not simply the “mereological sum” of discrete classical parts. Organic brains are not simply the “mereological sum” of discrete classical neurons.
“Quantum field theory” is the formal, mathematico-physical description of the natural world. The world is made up of the states of quantum fields, conventionally non-experiential in character, that take on discrete values. Physicists use mathematical entities known as “wavefunctions” to represent quantum states. Wavefunctions may be conceived as representing all the possible configurations of a superposed quantum system. Wavefunction(al)s are complex-valued functionals on the space of field configurations. Wavefunctions in quantum mechanics are sinusoidal functions with an amplitude (a “measure”) and also a phase. The Schrödinger equation,

iℏ ∂/∂t |ψ(t)⟩ = Ĥ |ψ(t)⟩,
describes the time-evolution of a wavefunction. “Coherence” means that the phases of the wavefunction are kept constant between the coherent particles, macromolecules or (hypothetically) neurons, while “decoherence” is the effective loss of ordering of the phase angles between the components of a system in a quantum superposition. Such thermally-induced “dephasing” rapidly leads to the emergence – on a perceptual naive realist story – of classical, i.e. probabilistically additive, behaviour in the central nervous system (“CNS”), and to the illusory appearance of separate, non-interfering organic macromolecules. Hence the discrete, decohered classical neurons of laboratory microscopy and biology textbooks. Unlike classical physics, quantum mechanics deals with superpositions of probability amplitudes rather than of probabilities; hence the interference terms in the probability distribution. Decoherence should be distinguished from dissipation, i.e. the loss of energy from a system – a much slower, classical effect. Phase coherence is a quantum phenomenon with no classical analogue. If quantum theory is universally true, then any physical system – a molecule, neuron, neuronal network or an entire mind-brain – exists partly in all its theoretically allowed states, or configurations of its physical properties, simultaneously: a “quantum superposition”; informally, a “Schrödinger’s cat state”. Each state is formally represented by a complex vector in Hilbert space. Whatever overall state the nervous system is in can be represented as a superposition of varying amounts of particular states (“eigenstates”), where the amount that each eigenstate contributes to the overall sum is termed a component. The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value.
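A toy numerical sketch may help fix ideas (illustrative only: the two-path system and the 50/50 amplitudes are assumptions for the demo, not a model of any neuronal process). With a fixed relative phase the two amplitudes interfere; averaging over random relative phases – decoherence as dephasing – erases the interference term and leaves only the classically additive probabilities:

```python
import cmath
import math
import random

# Two-path superposition with equal weights: |a1|^2 + |a2|^2 = 1.
a1 = 1 / math.sqrt(2)
a2 = 1 / math.sqrt(2)

def detection_probability(relative_phase):
    """Born-rule probability when the two path amplitudes are summed."""
    amplitude = a1 + a2 * cmath.exp(1j * relative_phase)
    return abs(amplitude) ** 2  # equals 1 + cos(relative_phase)

# Coherent case: a definite relative phase gives an interference term.
coherent = detection_probability(0.0)   # constructive: twice the classical value

# Decohered case: average over random ("scrambled") relative phases.
random.seed(0)
samples = 100_000
dephased = sum(detection_probability(random.uniform(0, 2 * math.pi))
               for _ in range(samples)) / samples

# Classical case: probabilities simply add; no interference term survives.
classical = abs(a1) ** 2 + abs(a2) ** 2

print(round(coherent, 6), round(dephased, 2), round(classical, 6))
```

The dephased average converges on the classical sum of probabilities: scrambling phase angles is all it takes to make the statistics look classical.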
The absolute value of the probability amplitude encodes information about probability densities, so to speak, whereas its phase encodes information about the interference between quantum states. On measurement by an experimenter, the value of the physical quantity in a quantum superposition will naively seem to “collapse” in an irreducibly stochastic manner, with a probability equal to the squared absolute value of the corresponding coefficient in the linear combination. If the superposition principle really breaks down in the mind-brain, as traditional Copenhagen positivists still believe, then the central conjecture of this paper is false.
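A minimal illustration of that measurement rule (a toy two-state superposition; the coefficients are arbitrary choices for the demo):

```python
import cmath

# Toy superposition |psi> = c0|0> + c1|1>; coefficients chosen arbitrarily.
c0 = (3 / 5) * cmath.exp(1j * 0.3)   # complex coefficient of eigenstate |0>
c1 = (4 / 5) * cmath.exp(1j * 1.7)   # complex coefficient of eigenstate |1>

# Born rule: outcome probabilities are the squared absolute values of the
# coefficients; the phases drop out of the single-outcome statistics.
p0 = abs(c0) ** 2
p1 = abs(c1) ** 2

print(round(p0, 2), round(p1, 2))    # 0.36 0.64
assert abs((p0 + p1) - 1.0) < 1e-12  # normalisation: probabilities sum to 1
```

The phases that vanish here are exactly what the interference terms of the previous paragraph depend on: they matter to superpositions, not to single collapsed outcomes.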
“Mereological nihilism”, also known as “compositional nihilism”, is the philosophical position that objects with proper parts do not exist, whether extended in space or in time. Only basic building blocks (particles, fields, superstrings, branes, information, micro-experiences, quantum superpositions, entangled states, or whatever) without parts exist. Such ontological reductionism is untenable if the mind-brain supports macroscopic quantum coherence in the guise of bound phenomenal states, because coherent neuronal superpositions describe individual physical states. Coherent superpositions of neuronal feature-detectors cannot be interpreted as classical ensembles of states. Radical ontological reductionism is even more problematic if post-Everett(11) quantum mechanics is correct: reality is exhaustively described by the time-evolution of one gigantic universal wavefunction. If such “wavefunction monism” is true, then talk of how neuronal superpositions are rapidly “destroyed” is just a linguistic convenience, because a looser, heavily-disguised coherence persists within a higher-level Schrödinger equation (or its relativistic generalisation) that subsumes the previously tighter entanglement within a hierarchy of wavefunctions, all ultimately subsumed within the universal wavefunction.
“Direct realism”, also known as “naive realism”, about perception is the pre-scientific view that the mind-brain is directly acquainted with the external world. In contrast, the “world-simulation model”(12) assumed here treats the mind-brain as running a data-driven simulation of gross fitness-relevant patterns in the mind-independent environment. As an inferential realist, the world-simulationist is not committed per se to any kind of idealist ontology, physicalistic or otherwise. However, s/he will take phenomenal consciousness to be broader in scope than the traditional perceptual direct realist supposes. The world-simulationist will also be less confident than the direct realist that we have any kind of pre-theoretic conceptual handle on the nature of the “physical” beyond the formalism of theoretical physics – and our own phenomenally-bound physical consciousness.
“Classical worlds” are what perceptual direct realists call the world. Quantum theory suggests that the multiverse exists in an inconceivably vast cosmological superposition. Yet within our individual perceptual world-simulations, familiar macroscopic objects 1) occupy definite positions (the “preferred basis” problem); 2) don’t readily display quantum interference effects; and 3) yield well-defined outcomes when experimentally probed. Cats are either dead or alive, not dead-and-alive. Or as one scientific populariser puts it, “Where Does All the Weirdness Go?” This paper argues that the answer lies under our virtual noses – though independent physical proof will depend on next-generation matter-wave interferometry. Phenomenally-bound classical world-simulations are the mind-dependent signature of the quantum “weirdness”. Without the superposition principle, no phenomenally-bound classical world-simulations could exist – and no minds. In short, we shouldn’t imagine superpositions of live-and-dead cats, but instead think of superpositions of colour-, shape-, edge- and motion-processing neurons. Thanks to natural selection, the content of our waking world-simulations typically appears classical; but the vehicle of the simulation that our minds run is inescapably quantum. If the world were classical it wouldn’t look like anything to anyone.
A “zombie”, sometimes called a “philosophical zombie” or “p-zombie” to avoid confusion with its lumbering Hollywood cousins, is a hypothetical organism that is materially and behaviourally identical to humans and other organic sentients but which isn’t conscious. Philosophers explore the epistemological question of how each of us can know that s/he isn’t surrounded by p-zombies. Yet we face a mystery deeper than the ancient sceptical Problem of Other Minds. If our ordinary understanding of the fundamental nature of matter and energy as described by physics is correct, and if our neurons are effectively decohered classical objects as suggested by standard neuroscience, then we all ought to be zombies. Following David Chalmers, this is called the Hard Problem of consciousness.
– Non-Materialist Physicalism: An Experimentally Testable Conjecture, by David Pearce
Why does anything exist at all? Intuitively, there shouldn’t be anything to explain. Bizarrely, this doesn’t seem to be the case. One clue to the answer may be our difficulty in rigorously specifying a default state of “nothingness” from which any departure stands in need of an explanation. A dimensionless point? A timeless void? A quantum vacuum? All attempts to specify an alternative reified “nothingness” – an absence of laws, properties, objects, or events – just end up smuggling in something else instead. Specifying anything at all, including the truth-conditions for our sense of “nothingness”, requires information. Information is fundamental in physics. Information is physical. Information, physics tells us, cannot be created or destroyed. Thus wavefunctions in quantum mechanics don’t really collapse to yield single definite classical outcomes. Decoherence – the scrambling of phase angles between the components of a quantum superposition – doesn’t literally destroy superpositions. Not even black holes really destroy information.
So naturally we may ask: where did information come from in the first place?
Perhaps the answer is that it didn’t. The total information content of reality is necessarily zero: the superposition principle of QM formalises inexistence.
On this story, one timeless logico-physical principle explains everything, including itself. The superposition principle of quantum mechanics formalises an informationless zero ontology – the default condition from which any notional departure would need to be explained. In 2002, Physics World readers voted Young’s double-slit experiment with single electrons the “most beautiful experiment in physics”. (cf. Wojciech Zurek) The universal validity of the superposition principle in post-Everett QM suggests that the mystery of our existence has a scientific rather than theological explanation. Richard Feynman liked to remark that all of quantum mechanics can be understood by carefully thinking through the implications of the double-slit experiment. Quite so; only maybe Feynman could have gone further. If Everettian QM is correct, reality consists of a single vast quantum-coherent superposition. Each element in the superposition, each orthogonal relative state, each “world”, is equally real. Most recently, the decoherence program in post-Everett quantum mechanics explains the emergence of quasi-classical branches (“worlds”) like ours from the underlying quantum field-theoretic formalism.
What does it mean to say that the information content of reality may turn out to be zero? Informally, consider the (classical) Library of Babel. The Library of Babel contains all possible books with all possible words and letters in all possible combinations. The Library of Babel as a whole has zero information content. Yet somewhere amid the nonsense lies the complete works of Shakespeare – and you and me. However, the Library of Babel is classical. Withdrawing a book from the Library of Babel yields a single definite classical outcome – thereby creating information. Withdrawing more books creates more information. All analogies break down somewhere. Evidently we aren’t literally living in Borges’ Library of Babel.
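The point about withdrawal creating information can be made quantitative. Using Borges’ own format for the books (410 pages of 40 lines of 80 characters, drawn from 25 orthographic symbols), specifying one definite book out of the equiprobable whole creates log2(N) bits:

```python
import math

# Borges' format: 410 pages x 40 lines x 80 characters, from 25 symbols.
chars_per_book = 410 * 40 * 80            # 1,312,000 characters per book
# The library contains every one of the 25**chars_per_book possible books,
# so the set as a whole is fixed by this trivial counting rule alone.
# Withdrawing ONE definite book selects a single outcome among N
# equiprobable alternatives, creating log2(N) bits of information.
bits_created = chars_per_book * math.log2(25)
print(f"{bits_created:.3e} bits")         # roughly 6.1 million bits per book
```

The complete library needs only the one-line counting rule to specify; any single withdrawn book needs millions of bits.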
So instead of the classical Library of Babel, let us tighten the analogy: imagine the quantum Library of Babel. In standard probability theory, if there are two ways something can happen, we get the total probability by summing the probabilities of each way; if we sum two ordinary non-zero probabilities, then we always get a bigger probability. In QM, likewise, we get the total amplitude by summing the amplitudes for each of the two ways. Yet because amplitudes in QM are complex numbers, summing two amplitudes can yield zero. Having two ways to do something in quantum mechanics can make it not happen. Recall again the double-slit experiment. Adding a slit to the apparatus can make particles less likely to arrive somewhere despite there being more ways to get there. Now scale up the double-slit experiment to the whole of reality. The information content of the universal state vector is zero. (cf. Jan-Markus Schwindt, “Nothing happens in the Universe of the Everett Interpretation”).
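The contrast can be sketched in a few lines (the equal-magnitude, opposite-phase amplitudes are an idealisation chosen to make the cancellation exact):

```python
import cmath
import math

# Classically, a second way to reach the detector can only add probability.
p_total = 0.5 + 0.5
print(p_total)                             # 1.0: more ways, more likely

# Quantum-mechanically, each way contributes a complex AMPLITUDE.
a1 = cmath.rect(math.sqrt(0.5), 0.0)       # magnitude sqrt(0.5), phase 0
a2 = cmath.rect(math.sqrt(0.5), math.pi)   # same magnitude, opposite phase
probability = abs(a1 + a2) ** 2            # destructive interference
print(round(probability, 12))              # 0.0: two ways, and it never happens
```

Two non-zero probabilities can only grow when summed; two non-zero amplitudes can annihilate each other.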
The quantum Library of Babel has no information.
Caveats? Loose ends? The superposition principle has been experimentally tested only up to the level of fullerenes, though more ambitious experiments are planned (cf. “Physicists propose ‘Schrödinger’s virus’ experiment”). Some scientists still expect that the unitary Schrödinger dynamics will need to be supplemented or modified for larger systems – violating the informationless zero ontology that we’re exploring here.
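Why such tests get harder with size can be seen from the de Broglie relation λ = h/(mv): the heavier the molecule, the shorter the wavelength that the interferometer must resolve. A back-of-the-envelope sketch (the ~200 m/s beam velocity is illustrative, not a figure from any particular experiment):

```python
# de Broglie wavelength lambda = h / (m * v): it shrinks as mass grows,
# which is why interference is ever harder to resolve for large molecules.
PLANCK_H = 6.62607015e-34     # Planck constant, J*s (exact SI value)
AMU = 1.66053907e-27          # atomic mass unit, kg

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    return PLANCK_H / (mass_kg * speed_m_per_s)

mass_c60 = 60 * 12 * AMU                        # C60 fullerene, ~720 amu
lam = de_broglie_wavelength(mass_c60, 200.0)    # illustrative ~200 m/s beam
print(f"{lam:.2e} m")   # a few picometres, far smaller than the molecule itself
```

A virus is thousands of times heavier still, which is what makes the proposed “Schrödinger’s virus” experiments so ambitious.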
Consciousness? Does the superposition principle break down in our minds? After all, we see live or dead cats, not live-and-dead-cat superpositions. Yet this assumption of classical outcomes – even non-unique classical outcomes – presupposes that we have direct perceptual access to the mind-independent world. Controversially (cf. Max Tegmark, “Why the brain is probably not a quantum computer”), perhaps the existence of our phenomenally-bound classical world-simulations itself depends on ultra-rapid quantum-coherent neuronal superpositions in the CNS. For if the superposition principle really broke down in the mind-brain, as classical neuroscience assumes, then we’d at most be so-called “micro-experiential zombies” – just patterns of discrete, decohered Jamesian neuronal “mind-dust” incapable of phenomenally simulating a live or a dead classical cat. (cf. David Chalmers’ “The Combination Problem for Panpsychism”)
This solution to the phenomenal binding problem awaits experimental falsification with tomorrow’s tools of molecular matter-wave interferometry.
What about the countless different values of consciousness? How can an informationless zero ontology possibly explain the teeming diversity of our experience? This is a tough one. Yet just as the conserved constants in physics cancel out to zero, and just as all of mathematics can in principle be derived from the properties of the empty set, perhaps the solutions to the field-theoretic equations of QFT mathematically encode the textures of consciousness. If we had a cosmic analogue of the Rosetta stone, then we’d see that these values inescapably “cancel out” to zero too. Unfortunately, it’s hard to think of any experimental tests for this highly speculative conjecture.
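The aside about deriving mathematics from the empty set can at least be made concrete with the standard von Neumann construction of the natural numbers, where 0 = {} and n + 1 = n ∪ {n}:

```python
# Von Neumann naturals: 0 is the empty set; n + 1 is n ∪ {n}.
# Every number below is built from nothing but nested empty sets.
def successor(n: frozenset) -> frozenset:
    return n | frozenset({n})

zero = frozenset()            # {}
one = successor(zero)         # {{}}
two = successor(one)          # {{}, {{}}}
three = successor(two)

# The cardinality of the von Neumann ordinal n is n itself.
print(len(zero), len(one), len(two), len(three))   # 0 1 2 3
```

Whether the textures of consciousness could “cancel out” analogously is, as the paragraph above concedes, far beyond anything this construction shows.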
“A theory that explains everything explains nothing”, protests the critic of Everettian QM. To which we may reply, rather tentatively: yes, precisely.
David Pearce is a personal inspiration. He recently had a conversation with Peter Singer, Hilary Greaves and Justin Oakley. I encourage anyone interested in hard core stuff to watch it 🙂
“Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing? Is the unified theory so compelling that it brings about its own existence? Or does it need a creator, and, if so, does he have any other effect on the universe? And who created him?” (Stephen Hawking, A Brief History of Time, Bantam Books, Toronto, 1988, p.174.)
I’m going to go with “the fire in the equations of physics is consciousness itself.”