We begin with the assumption that all emergentist approaches are inadequate to solve the hard problem of experience. Consequently, it’s hard to escape the conclusion that consciousness is fundamental and that some form of panpsychism is true. Unfortunately, panpsychism faces the combination problem – why should proto-experiences combine to form full-fledged experiences? Since the combination problem has resisted many attempted solutions, we argue for compositionality as the missing ingredient needed to explain mid-level experiences such as ours. Since this is controversial, we carefully present the full argument below. To begin, we assume, following Frege, that experience cannot exist without being accompanied by a subject of experience (SoE). An SoE provides the structural and spatio-temporally bounded “container” for experience and, following Strawson, is conceived as a thin subject. Thin subjects exhibit a phenomenal unity with different types of phenomenal content (sensations, thoughts, etc.) occurring during their temporal existence. Next, following Stoljar, we invoke our ignorance of the true physical as the reason for the explanatory gap between present-day physical processes (events, properties) and experience. We are therefore permitted to conceive of thin subjects as physical compositions. Compositionality has been an intensely studied area in the past twenty years. While there is no clear consensus here, we argue, following Koslicki, that a case can be made for a restricted compositionality principle and that thin subjects are physical compositions of a certain natural kind. In this view, SoEs are natural-kind objects with a yet-to-be-specified compositionality relation connecting them to the physical world. The specifics of this relation will be detailed by a new physics; at this juncture, all we can provide are guiding metaphors. We suggest that the relation binding an SoE to the physical is akin to the relation between a particle and a field. In present-day physics, a particle is conceived as a coherent excitation of a field and is spatially and temporally bounded (with the photon being the sole exception). Under the right set of circumstances, a particle coalesces out of a field and dissipates. We suggest that an SoE can be conceived as akin to a particle coalescing out of physical fields, persisting for a brief period of time and then dissipating – in a manner similar to the phenomenology of a thin subject. Experiences are physical properties of SoEs, with the constraint (specified by a similarity metric) that SoEs belonging to the same natural kind will have similar experiences. The counter-intuitive aspect of this proposal is the unexpected “complexity” exhibited by SoE particles, but we have been prepared for this by the complex behavior of elementary particles over ninety years of experimental physics. Consequently, while it is odd at first glance to conceive of subjects of experience as particles, the spatial and temporal unity exhibited by particles (as opposed to fields) and the expectation that SoEs are new kinds of particles pave the way for cementing this notion. Panpsychism and compositionality are therefore new bedfellows aiding us in resolving the hard problem.
– Talk given at The Science of Consciousness 2016, held in Tucson, Arizona (slides)
A common way of viewing Everettian quantum mechanics is to say that in an act of measurement, the universe splits into two. There is a world in which the electron has x-spin up, the pointer points to “x-spin up,” and we believe the electron has x-spin up. There is another world in which the electron has x-spin down, the pointer points to “x-spin down,” and we believe the electron has x-spin down. This is why Everettian quantum mechanics is often called “the many worlds interpretation.” Because the contrary pointer readings exist in different universes, no one notices that both are read. This way of interpreting Everettian quantum mechanics raises many metaphysical difficulties. Does the pointer itself split in two? Or are there two numerically distinct pointers? If the whole universe splits into two, doesn’t this wildly violate conservation laws? There is now twice as much energy and momentum in the universe as there was just before the measurement. How plausible is it to say that the entire universe splits?
Although this “splitting universes” reading of Everett is popular (Deutsch 1985 speaks this way in describing Everett’s view, a reading originally due to Bryce DeWitt), fortunately, a less puzzling interpretation has been developed. The idea is to read Everett’s theory as he originally intended it. Fundamentally, there is no splitting, only the evolution of the wave function according to the Schrödinger dynamics. To make this consistent with experience, it must be the case that there are branches in the quantum state corresponding to what we observe. However, as David Wallace, for example, has argued (2003, 2010), we need not view these branches, or indeed the branching process itself, as fundamental. Rather, these many branches or many worlds are patterns in the one universal quantum state that emerge as the result of its evolution. Wallace, building on work by Simon Saunders (1993), argues that there is a kind of dynamical process, technically termed “decoherence,” that can ground the emergence of quasi-classical branches within the quantum state. Decoherence is a process involving an interaction between two systems (one of which may be regarded as the system and the other as its environment) in which distinct components of the quantum state come to evolve independently of one another. That this occurs is a result of the wave function’s Hamiltonian, that is, of the kind of system it is. A wave function that (due to the kind of state it started out in and the Schrödinger dynamics) exhibits decoherence will enter into states capable of representation as a sum of noninteracting terms in a particular basis (e.g., a position basis). When this happens, the system’s dynamics will appear classical from the perspective of the individual branches.
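A schematic worked example, added here for concreteness and not part of the quoted text, shows the standard textbook form of this process. A superposed system interacts with its environment, and the environment states become (nearly) orthogonal records of the system states:

\[
\big(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\big)\otimes|E_0\rangle \;\longrightarrow\; \alpha\,|{\uparrow}\rangle\otimes|E_{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle\otimes|E_{\downarrow}\rangle, \qquad \langle E_{\uparrow}|E_{\downarrow}\rangle \approx 0 .
\]

Once \(\langle E_{\uparrow}|E_{\downarrow}\rangle \approx 0\), the two terms can no longer interfere in practice, and each evolves as if it were a world of its own. These effectively autonomous terms are the “branches” of the quasi-classical multiverse.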
[…]
Note that the facts about the quantum state decohering are not built into the fundamental laws. Rather, this is an accidental fact depending on the kind of state our universe started out in. The existence of these quasi-classical states is not a fundamental fact either, but something that emerges from the complex behavior of the fundamental state. The sense in which there are many worlds on this way of understanding Everettian quantum mechanics is therefore not the same as it is on the more naive approach already described. Fundamentally there is just one universe evolving according to the Schrödinger equation (or whatever is its relativistically appropriate analog). However, because of the special way this one world evolves, and in particular because parts of this world do not interfere with each other and can each on their own ground the existence of quasi-classical macro-objects that look like individual universes, it is correct in this sense to say that (nonfundamentally) there are many worlds.
[…]
As metaphysicians, we are interested in the question of what the world is fundamentally like according to quantum mechanics. Some have argued that the answer these accounts give us (setting aside Bohmian mechanics for the moment) is that fundamentally all one needs to believe in is the wave function. What is the wave function? It is something that, as we have already stated, may be described as a field on configuration space, a space where each point can be taken to correspond to a configuration of particles, a space that has 3N dimensions where N is the number of particles. So, according to these versions of quantum mechanics (orthodox quantum mechanics, Everettian quantum mechanics, spontaneous collapse theories), all there is fundamentally is a wave function, a field in a high-dimensional configuration space. The view that the wave function is a fundamental object and a real, physical field on configuration space is today referred to as “wave function realism.” The view that such a wave function is everything there is fundamentally is wave function monism.
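In the simplest non-relativistic case this picture can be written down explicitly; the following formal gloss is added here and is not part of the quoted text. For an N-particle system, the wave function is a complex-valued field on a 3N-dimensional space:

\[
\Psi : \mathbb{R}^{3N} \times \mathbb{R} \to \mathbb{C}, \qquad \Psi(q_1, q_2, \ldots, q_N, t),
\]

where each \(q_i \in \mathbb{R}^3\) would classically be read as the position of the i-th particle. The wave function realist takes this single field \(\Psi\), not the N particles, as the fundamental object.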
To understand wave function monism, it will be helpful to see how it represents the space on which the wave function is spread. We call this space “configuration space,” as is the norm. However, note that on the view just described, this is not an apt name because what is supposed to be fundamental on this view is the wave function, not particles. So, although the points in this space might correspond in a sense to particle configurations, what this space is fundamentally is not a space of particle configurations. Likewise, although we’ve represented the number of dimensions configuration space has as depending on the number N of particles in a system, this space’s dimensionality should not really be construed as dependent on the number of particles in a system. Nevertheless, the wave function monist need not be an eliminativist about particles. As we have seen, for example, in the Everettian approach, wave function monists can allow that there are particles, derivative entities that emerge out of the decoherent behavior of the wave function over time. Wave function monists favoring other solutions to the measurement problem can also allow that there are particles in this derivative sense. But the reason the configuration space on which the wave function is spread has the number of dimensions it does is not, in the final analysis, that there are particles. This is rather a brute fact about the wave function, and this in turn is what grounds the number of particles there are.
– The Wave Function: Essays on the Metaphysics of Quantum Mechanics. Edited by Alyssa Ney and David Z Albert (pp. 33-34, 36-37).
Both physics and philosophy are jargon-ridden. So let’s first define some key concepts.
Both “consciousness” and “physical” are contested terms. Accurately if inelegantly, consciousness may be described, following Nagel (“What is it like to be a bat?”), as the subjective what-it’s-like-ness of experience. Academic philosophers term such self-intimating “raw feels” “qualia” – whether macro-qualia or micro-qualia. The minimum unit of consciousness (or “psychon”, so to speak) has been variously claimed to be the entire universe, a person, a sub-personal neural network, an individual neuron, or the most basic entities recognised by quantum physics. In The Principles of Psychology (1890), American philosopher and psychologist William James christened these phenomenal simples “primordial mind-dust”. This paper conjectures that (1) our minds consist of ultra-rapidly decohering neuronal superpositions in strict accordance with unmodified quantum physics, without the mythical “collapse of the wavefunction”; (2) natural selection has harnessed the properties of these neuronal superpositions so that our minds run phenomenally-bound world-simulations; and (3) with enough ingenuity, the non-classical interference signature of these conscious neuronal superpositions will be independently experimentally detectable (see 6 below) to the satisfaction of the most incredulous critic.
The “physical” may be contrasted with the supernatural or the abstract and, by dualists and epiphenomenalists, with the mental. The current absence of any satisfactory “positive” definition of the physical leads many philosophers of science to adopt instead the “via negativa”. Thus some materialists have sought stipulatively to define the physical in terms of an absence of phenomenal experience. Such a priori definitions of the nature of the physical are question-begging.
“Physicalism” is sometimes treated as the formalistic claim that the natural world is exhaustively described by the equations of physics and their solutions. Beyond these structural-relational properties of matter and energy, the term “physicalism” is also often used to make an ontological claim about the intrinsic character of whatever the equations describe. This intrinsic character, or metaphysical essence, is typically assumed to be non-phenomenal. “Strawsonian physicalists” (cf. “Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?”) dispute any such assumption. Traditional reductive physicalism proposes that the properties of larger entities are determined by properties of their physical parts. If the wavefunction monism of post-Everett quantum mechanics assumed here is true, then the world does not contain discrete physical parts as understood by classical physics.
“Materialism” is the metaphysical doctrine that the world is made of intrinsically non-phenomenal “stuff”. Materialism and physicalism are often treated as cousins and sometimes as mere stylistic variants – with “physicalism” used as a nod to how bosonic fields, for example, are not matter. “Physicalistic materialism” is the claim that physical reality is fundamentally non-experiential and that the natural world is exhaustively described by the equations of physics and their solutions.
“Panpsychism” is the doctrine that the world’s fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn’t claim that the world’s fundamental physical stuff is wholly experiential in nature.
“Epiphenomenalism” in philosophy of mind is the view that experience is caused by material states or events in the brain but does not itself cause anything; the causal efficacy of mental agency is an illusion.
For our purposes, “idealism” is the ontological claim that reality is fundamentally experiential. This use of the term should be distinguished from Berkeleyan idealism and, more generally, from subjective idealism, i.e. the doctrine that only mental contents exist: reality is mind-dependent. One potential source of confusion between contemporary scientific idealism and traditional philosophical idealism is the use of the term “world-simulation” by inferential realists in the theory of perception. The mind-dependence of one’s phenomenal world-simulation, i.e. the quasi-classical world of one’s everyday experience, does not entail the idealist claim that the mind-independent physical world is intrinsically experiential in nature – a far bolder conjecture that we nonetheless tentatively defend here.
“Physicalistic idealism” is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions: more specifically, by the continuous, linear, unitary evolution of the universal wavefunction of post-Everett quantum mechanics. The “decoherence program” in contemporary theoretical physics aims to show in a rigorously quantitative manner how quasi-classicality emerges from the unitary dynamics.
“Monism” is the conjecture that reality consists of a single kind of “stuff” – be it material, experiential, spiritual, or whatever. Wavefunction monism is the view that the universal wavefunction mathematically represents, exhaustively, all there is in the world. Strictly speaking, wavefunction monism shouldn’t be construed as the claim that reality literally consists of a certain function, i.e. a mapping from some mind-wrenchingly immense configuration space to the complex numbers, but rather as the claim that every mathematical property of the wavefunction except the overall phase corresponds to some property of the physical world. “Dualism”, the conjecture that reality consists of two kinds of “stuff”, comes in many flavours: naturalistic and theological; interactionist and non-interactionist; property and ontological. In the modern era, most scientifically literate monists have been materialists. But to describe oneself as both a physicalist and a monistic idealist is not the schizophrenic word-salad it sounds at first blush.
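The exception for the overall phase can be made precise with a line of standard quantum formalism; this gloss is added here and is not part of the original definition. Multiplying a state by a global phase \(e^{i\theta}\) changes no Born-rule probability:

\[
\big|\langle \phi \mid e^{i\theta}\psi \rangle\big|^{2} \;=\; \big|e^{i\theta}\big|^{2}\,\big|\langle \phi \mid \psi \rangle\big|^{2} \;=\; \big|\langle \phi \mid \psi \rangle\big|^{2},
\]

so no possible measurement distinguishes \(|\psi\rangle\) from \(e^{i\theta}|\psi\rangle\). Relative phases between components of a superposition, by contrast, do correspond to physical (interference) properties.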
“Functionalism” in philosophy of mind is the theory that mental states are constituted solely by their functional role, i.e. by their causal relations to other mental states, perceptual inputs, and behavioural outputs. Functionalism is often associated with the idea of “substrate-neutrality”, sometimes misnamed “substrate-independence”, i.e. the idea that minds can be realised in multiple substrates and at multiple levels of abstraction. However, micro-functionalists may dispute substrate-neutrality on the grounds that one or more properties of mind, for example phenomenal binding, functionally implicate the world’s quantum-mechanical bedrock from which the quasi-classical worlds of Everett’s multiverse emerge. Thus this paper will argue that only successive quantum-coherent neuronal superpositions, at time-scales that naively seem preposterously short, can explain phenomenal binding. Without phenomenal binding, no functionally adaptive classical world-simulations could exist in the first instance.
The “binding problem”(10), also called the “combination problem”, refers to the mystery of how the micro-experiences mediated by supposedly discrete and distributed neuronal edge-detectors, motion-detectors, shape-detectors, colour-detectors, etc., can be “bound” into unitary experiential objects (“local” binding) apprehended by a unitary experiential self (“global” binding). Neuroelectrode studies using awake, verbally competent human subjects confirm that neuronal micro-experiences exist. Classical neuroscience cannot explain how they could ever be phenomenally bound.
“Mereology” is the theory of the relations of part to whole, and of part to part within a whole. Scientifically literate humans find it natural and convenient to think of particles, macromolecules or neurons as having their own individual wavefunctions by which they can be formally represented. However, the manifest non-classicality of phenomenal binding means that in some contexts we must consider describing the entire mind-brain via a single wavefunction. Organic minds are not simply the “mereological sum” of discrete classical parts. Organic brains are not simply the “mereological sum” of discrete classical neurons.
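The quantum-mechanical point behind this denial can be illustrated with a textbook two-system example, added here as a gloss rather than drawn from the original. An entangled state of two subsystems cannot be factorised into individual wavefunctions for the parts:

\[
|\Psi\rangle_{12} \;=\; \tfrac{1}{\sqrt{2}}\big(|0\rangle_{1}|1\rangle_{2} + |1\rangle_{1}|0\rangle_{2}\big) \;\neq\; |\psi\rangle_{1} \otimes |\phi\rangle_{2} \quad \text{for any } |\psi\rangle_{1}, |\phi\rangle_{2}.
\]

The whole has a single (pure) wavefunction, but neither part has one of its own; in this formal sense the whole is not a mereological sum of independently describable parts.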
“Quantum field theory” is the formal, mathematico-physical description of the natural world. The world is made up of the states of quantum fields, conventionally non-experiential in character, whose excitations take on discrete values. Physicists use mathematical entities known as “wavefunctions” to represent quantum states. Wavefunctions may be conceived as representing all the possible configurations of a superposed quantum system. Wavefunction(al)s are complex-valued functionals on the space of field configurations. Wavefunctions in quantum mechanics are complex-valued functions with an amplitude (a “measure”) and a phase. The Schrödinger equation

\[
i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) \;=\; \hat{H}\,\Psi(\mathbf{r},t)
\]
describes the time-evolution of a wavefunction. “Coherence” means that definite phase relations of the wavefunction are maintained between the coherent particles, macromolecules or (hypothetically) neurons, while “decoherence” is the effective loss of ordering of the phase angles between the components of a system in a quantum superposition. Such thermally-induced “dephasing” rapidly leads to the emergence – on a perceptual naive realist story – of classical, i.e. probabilistically additive, behaviour in the central nervous system (“CNS”), and also to the illusory appearance of separate, non-interfering organic macromolecules. Hence the discrete, decohered classical neurons of laboratory microscopy and biology textbooks. Unlike classical physics, quantum mechanics deals with superpositions of probability amplitudes rather than of probabilities; hence the interference terms in the probability distribution. Decoherence should be distinguished from dissipation, i.e. the loss of energy from a system – a much slower, classical effect. Phase coherence is a quantum phenomenon with no classical analogue.

If quantum theory is universally true, then any physical system such as a molecule, neuron, neuronal network or an entire mind-brain exists partly in all its theoretically allowed states, or configurations of its physical properties, simultaneously in a “quantum superposition”; informally, a “Schrödinger’s cat state”. Each state is formally represented by a complex vector in Hilbert space. Whatever overall state the nervous system is in can be represented as a superposition of varying amounts of these particular states (“eigenstates”), where the amount that each eigenstate contributes to the overall sum is termed a component.

The “Schrödinger equation” is a partial differential equation that describes how the state of a physical system changes with time. The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. The absolute value of the probability amplitude encodes information about probability densities, so to speak, whereas its phase encodes information about the interference between quantum states. On measurement by an experimenter, the value of the physical quantity in a quantum superposition will naively seem to “collapse” in an irreducibly stochastic manner, with a probability equal to the squared modulus of the coefficient of the corresponding state in the linear combination. If the superposition principle really breaks down in the mind-brain, as traditional Copenhagen positivists still believe, then the central conjecture of this paper is false.
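A minimal numerical sketch in Python, added here and not part of the paper, may help fix the distinction between interference and probabilistic additivity. It represents an equal two-state superposition by its density matrix, damps the off-diagonal (coherence) terms as dephasing would, and shows the Born-rule statistics sliding from full interference to additive, classical behaviour:

import numpy as np

# Illustrative toy model only: an equal superposition
# |psi> = (|0> + |1>)/sqrt(2), written as a density matrix rho.
alpha = beta = 1 / np.sqrt(2)
psi = np.array([alpha, beta], dtype=complex)
rho = np.outer(psi, psi.conj())      # pure state: off-diagonals = 0.5

# Projector onto |+> = (|0> + |1>)/sqrt(2), a basis where interference shows.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
P_plus = np.outer(plus, plus.conj())

def prob_plus(rho):
    # Born rule: P(+) = Tr(P_plus . rho)
    return float(np.real(np.trace(P_plus @ rho)))

# gamma = fraction of phase coherence surviving dephasing.
for gamma in (1.0, 0.5, 0.0):
    rho_d = rho.copy()
    rho_d[0, 1] *= gamma             # dephasing damps only the
    rho_d[1, 0] *= gamma             # off-diagonal interference terms
    print(f"coherence {gamma:.1f}: P(+) = {prob_plus(rho_d):.2f}")

# Prints 1.00 (full interference), 0.75, then 0.50 (classical mixture:
# probabilities are now additive, as with decohered textbook neurons).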
“Mereological nihilism”, also known as “compositional nihilism”, is the philosophical position that objects with proper parts do not exist, whether extended in space or in time. Only basic building blocks (particles, fields, superstrings, branes, information, micro-experiences, quantum superpositions, entangled states, or whatever) without parts exist. Such ontological reductionism is untenable if the mind-brain supports macroscopic quantum coherence in the guise of bound phenomenal states, because coherent neuronal superpositions describe individual physical states. Coherent superpositions of neuronal feature-detectors cannot be interpreted as classical ensembles of states. Radical ontological reductionism is even more problematic if post-Everett(11) quantum mechanics is correct: reality is exhaustively described by the time-evolution of one gigantic universal wavefunction. If such “wavefunction monism” is true, then talk of how neuronal superpositions are rapidly “destroyed” is just a linguistic convenience: a looser, heavily-disguised coherence persists within a higher-level Schrödinger equation (or its relativistic generalisation) that subsumes the previously tighter entanglement, within a hierarchy of wavefunctions all ultimately subsumed within the universal wavefunction.
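The claim that a coherent superposition describes an individual physical state rather than a classical ensemble can be put in one standard line of formalism; this is an added gloss, not from the original. For \(|\psi\rangle = (|0\rangle + |1\rangle)/\sqrt{2}\),

\[
\rho_{\text{superposition}} = |\psi\rangle\langle\psi| = \tfrac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \;\neq\; \rho_{\text{ensemble}} = \tfrac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\]

where \(\rho_{\text{ensemble}}\) is a 50/50 statistical mixture of \(|0\rangle\) and \(|1\rangle\). The off-diagonal entries of \(\rho_{\text{superposition}}\) are experimentally meaningful interference terms, so the superposed system is one individual state, not an ignorance-expressing ensemble of two.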
“Direct realism”, also known as “naive realism”, about perception is the pre-scientific view that the mind-brain is directly acquainted with the external world. In contrast, the “world-simulation model”(12) assumed here treats the mind-brain as running a data-driven simulation of gross fitness-relevant patterns in the mind-independent environment. As an inferential realist, the world-simulationist is not committed per se to any kind of idealist ontology, physicalistic or otherwise. However, s/he will understand phenomenal consciousness as broader in scope than the traditional perceptual direct realist does. The world-simulationist will also be less confident than the direct realist that we have any kind of pre-theoretic conceptual handle on the nature of the “physical” beyond the formalism of theoretical physics – and our own phenomenally-bound physical consciousness.
“Classical worlds” are what perceptual direct realists call the world. Quantum theory suggests that the multiverse exists in an inconceivably vast cosmological superposition. Yet within our individual perceptual world-simulations, familiar macroscopic objects 1) occupy definite positions (the “preferred basis” problem); 2) don’t readily display quantum interference effects; and 3) yield well-defined outcomes when experimentally probed. Cats are either dead or alive, not dead-and-alive. Or as one scientific populariser puts it, “Where Does All the Weirdness Go?” This paper argues that the answer lies under our virtual noses – though independent physical proof will depend on next-generation matter-wave interferometry. Phenomenally-bound classical world-simulations are the mind-dependent signature of the quantum “weirdness”. Without the superposition principle, no phenomenally-bound classical world-simulations could exist – and no minds. In short, we shouldn’t imagine superpositions of live-and-dead cats, but instead think of superpositions of colour-, shape-, edge- and motion-processing neurons. Thanks to natural selection, the content of our waking world-simulations typically appears classical; but the vehicle of the simulation that our minds run is inescapably quantum. If the world were classical it wouldn’t look like anything to anyone.
A “zombie“, sometimes called a “philosophical zombie” or “p-zombie” to avoid confusion with its lumbering Hollywood cousins, is a hypothetical organism that is materially and behaviourally identical to humans and other organic sentients but which isn’t conscious. Philosophers explore the epistemological question of how each of us can know that s/he isn’t surrounded by p-zombies. Yet we face a mystery deeper than the ancient sceptical Problem of Other Minds. If our ordinary understanding of the fundamental nature of matter and energy as described by physics is correct, and if our neurons are effectively decohered classical objects as suggested by standard neuroscience, then we all ought to be zombies. Following David Chalmers, this is called the Hard Problem of consciousness.