A Non-Circular Solution to the Measurement Problem: If the Superposition Principle is the Bedrock of Quantum Mechanics, Why Do We Experience Definite Outcomes?

Source: Quora question – “Scientifically speaking, how serious is the measurement problem concerning the validity of the various interpretations in quantum mechanics?”


David Pearce responds [emphasis mine]:

It’s serious. Science should be empirically adequate. Quantum mechanics is the bedrock of science. The superposition principle is the bedrock of quantum mechanics. So why don’t we ever experience superpositions? Why do experiments have definite outcomes? “Schrödinger’s cat” isn’t just a thought-experiment. The experiment can be done today. If quantum mechanics is complete, then microscopic superpositions should rapidly be amplified via quantum entanglement into the macroscopic realm of everyday life.

Copenhagenists are explicit. The lesson of quantum mechanics is that we must abandon realism about the micro-world. But Schrödinger’s cat can’t be quarantined. The regress spirals without end. If quantum mechanics is complete, the lesson of Schrödinger’s cat is that if one abandons realism about a micro-world, then one must abandon realism about a macro-world too. The existence of an objective physical realm independent of one’s mind is certainly a useful calculational tool. Yet if all that matters is empirical adequacy, then why invoke such superfluous metaphysical baggage? The upshot of Copenhagen isn’t science, but solipsism.

There are realist alternatives to quantum solipsism. Some physicists propose that we modify the unitary dynamics to prevent macroscopic superpositions. Roger Penrose, for instance, believes that a non-linear correction to the unitary evolution should be introduced to prevent superpositions of macroscopically distinguishable gravitational fields. Experiments to (dis)confirm the Penrose-Hameroff Orch-OR conjecture should be feasible later this century. But if dynamical collapse theories are wrong, and if quantum mechanics is complete (as most physicists believe), then “cat states” should be ubiquitous. This doesn’t seem to be what we experience.

Everettians are realists, in a sense. Unitary-only QM says that there are quasi-classical branches of the universal wavefunction where you open an infernal chamber and see a live cat, other decohered branches where you see a dead cat; branches where you perceive the detection of a spin-up electron that has passed through a Stern–Gerlach device, other branches where you perceive the detector recording a spin-down electron; and so forth. I’ve long been haunted by a horrible suspicion that unitary-only QM is right, though Everettian QM boggles the mind (cf. Universe Splitter). Yet the heart of the measurement problem from the perspective of empirical science is that one doesn’t ever see superpositions of live-and-dead cats, or detect superpositions of spin-up-and-spin-down electrons, but only definite outcomes. So the conjecture that there are other, madly proliferating decohered branches of the universal wavefunction where different versions of you record different definite outcomes doesn’t solve the mystery of why anything anywhere ever seems definite to anyone at all. Therefore, the problem of definite outcomes in QM isn’t “just” a philosophical or interpretational issue, but an empirical challenge for even the most hard-nosed scientific positivist. “Science” that isn’t empirically adequate isn’t science: it’s metaphysics. Some deeply-buried background assumption(s) or presupposition(s) that working physicists are making must be mistaken. But which? To quote the 2016 International Workshop on Quantum Observers organized by the IJQF,

“…the measurement problem in quantum mechanics is essentially the determinate-experience problem. The problem is to explain how the linear quantum dynamics can be compatible with the existence of our definite experience. This means that in order to finally solve the measurement problem it is necessary to analyze the observer who is physically in a superposition of brain states with definite measurement records. Indeed, such quantum observers exist in all main realistic solutions to the measurement problem, including Bohm’s theory, Everett’s theory, and even the dynamical collapse theories. Then, what does it feel like to be a quantum observer?”

Indeed. Here I’ll just state rather than argue my tentative analysis.
Monistic physicalism is true. Quantum mechanics is formally complete. There is no consciousness-induced collapse of the wave function, no “hidden variables”, nor any other modification or supplementation of the unitary Schrödinger dynamics. The wavefunction evolves deterministically according to the Schrödinger equation as a linear superposition of different states. Yet what seems empirically self-evident, namely that measurements always find a physical system in a definite state, is false(!) The received wisdom, repeated in countless textbooks, that measurements always find a physical system in a definite state reflects an erroneous theory of perception, namely perceptual direct realism. As philosophers (e.g. the “two worlds” reading of Kant) and even poets (“The brain is wider than the sky…”) have long realised, the conceptual framework of perceptual direct realism is untenable. Only inferential realism about mind-independent reality is scientifically viable. Rather than assuming that superpositions are never experienced, suspend disbelief and consider the opposite possibility. Only superpositions are ever experienced. “Observations” are superpositions, exactly as unmodified and unsupplemented quantum mechanics says they should be: the wavefunction is a complete representation of the physical state of a system, including biological minds and the pseudo-classical world-simulations they run. Not merely “It is the theory that decides what can be observed” (Einstein); quantum theory decides the very nature of “observation” itself. If so, then the superposition principle underpins one’s subjective experience of definite, well-defined classical outcomes (“observations”), whether, say, a phenomenally-bound live cat, or the detection of a spin-up electron that has passed through a Stern–Gerlach device, or any other subjectively determinate outcome. If one isn’t dreaming, tripping or psychotic, then within one’s phenomenal world-simulation, the apparent collapse of a quantum state (into one of the eigenstates of the Hermitian operator associated with the relevant observable in accordance with a probability calculated as the squared absolute value of a complex probability amplitude) consists of fleeting uncollapsed neuronal superpositions within one’s CNS. To solve the measurement problem, the neuronal vehicle of observation and its subjective content must be distinguished. The universality of the superposition principle – not its unexplained breakdown upon “observation” – underpins one’s classical-seeming world-simulation. What naïvely seems to be the external world, i.e. one’s egocentric world-simulation, is what linear superpositions of different states feel like “from the inside”: the intrinsic nature of the physical. The otherwise insoluble binding problem in neuroscience and the problem of definite outcomes in QM share a solution.

Absurd?
Yes, for sure: this minimum requirement for a successful resolution of the mystery is satisfied (“If at first the idea is not absurd, then there is no hope for it” – Einstein, again). The raw power of environmentally-induced decoherence in a warm environment like the CNS makes the conjecture intuitively flaky. Assuming unitary-only QM, the effective theoretical lifetime of neuronal “cat states” in the CNS is less than a femtosecond. Neuronal superpositions of distributed feature-processors are intuitively just “noise”, not phenomenally-bound perceptual objects. At best, the idea that sub-femtosecond neuronal superpositions could underpin our experience of law-like classicality is implausible. Yet we’re not looking for plausible theories but testable theories. Every second of selection pressure in Zurek’s sense (cf. “Quantum Darwinism”) sculpting one’s neocortical world-simulation is more intense and unremitting than four billion years of evolution as conceived by Darwin. My best guess is that interferometry will disclose a perfect structural match. If the non-classical interference signature doesn’t yield a perfect structural match, then dualism is true.

Is the quantum-theoretic version of the intrinsic nature argument for non-materialist physicalism – more snappily, “Schrödinger’s neurons” – a potential solution to the measurement problem? Or a variant of the “word salad” interpretation of quantum mechanics?
Sadly, I can guess.
But if there were one experiment that I could do, one loophole I’d like to see closed via interferometry, then this would be it.


 

The Phenomenal Character of LSD + MDMA (Candy-Flipping) According to Cognitive Scientist Steve Lehar

Excerpt from: The Grand Illusion: A Psychonautical Odyssey Into the Depths of Human Experience (pages 60-62) by Steve Lehar (emphasis and links are mine)


Ecstasy

About this time I had the good fortune of locating a supply of ecstasy. True to its name, ecstasy promotes a kind of euphoric jitteryness, in which it is just a thrill to be alive! Every fiber of your being is just quivering with energy. But ecstasy also has some interesting perceptual manifestations. In the first place there is a kind of jitteryness across the whole visual field. And this jitteryness is so pronounced that it can manifest itself in your eyeballs, that jitter back and forth at a blinding speed. If you relax, and just let the jitters take over, the oscillations of your eyes will blur the whole scene into a peculiar double image. But if you concentrate, and focus, the ocular jitter can be made to subside, and thus become less noticeable or bothersome. One of my friends got the ocular jitters so bad that he could not control them, and that prevented him from having a good time. That was the last time he took ecstasy. I however found it enchanting. And I analyzed that subtle jitteryness more carefully. It was not caused exclusively by jittering of the eyeball, but different objects in the perceived world also seemed to jitter endlessly between alternate states. In fact, all perceived objects jittered in this manner, creating a fuzzy blur between alternate states. This was interesting for a psychonaut! It seemed to me that I could see the mechanism of my visual brain sweeping out the image of my experience right before my eyes, like the flying spot of light that paints the television picture on the glowing phosphor screen. The refresh rate of my visual mechanism had slowed to such a point as to make this sweep visible to me. Very interesting indeed!

Candy-Flipping

Having access simultaneously to ecstasy and LSD, I tried my hand at the practice known in the drug literature as “candy flipping”, that is, taking ecstasy and LSD in combination. The combination is so unique and different from the experience of either drug in isolation, that it has earned its own unique name. Under LSD and ecstasy I could see the flickering blur of visual generation most clearly. And I saw peculiar ornamental artifacts on all perceived objects, like a Fourier representation with the higher harmonics chopped off. LSD by itself creates sharply detailed ornamental artifacts, like a transparent overlay of an ornamental lattice or filigree pattern superimposed on the visual scene, especially in darkness. Ecstasy smooths out those sharp edges and blurs them into a creamy smooth rolling experience. I would sometimes feel some part of my world suddenly bulging out to greater magnification, like a fish-eye lens distortion appearing randomly in space, stretching everything in that portion of space like a reflection in a funhouse mirror. But it was not an actual bulging that changed the shape of the visual world, but more of a seeming bulging, that was perceived in an invisible sense without actual distortion of the world. For example one time I was putting on my boots to go outside, and as I reached down to pull on a boot, I suddenly got the impression that my leg grew to ten times its normal length, but I could still reach my boot because my arms had also grown by the same proportion, as had the whole space in that part of the room. Nothing actually looked any different after this expansion, it was just my sense of the scale of the world that had undergone this transformation, and even as I contemplated this, and finished securing my boot, the world shrank down gradually back to its normal scale again and the distortion vanished.

I have theorized that the way that ecstasy achieved its creamy smoothness is by dithering or alternating so fast between perceptual alternatives as to blur them together, like a spinning propellor that appears as a semi-transparent disc. At this level of observation I was unable to get my co-trippers to see the features that I was seeing. I would ask them when they saw that line of trees, did they not see illusory projections, like a transparent overlay of vectors projecting up from the trees into the blue sky that I could see? They did not see these things. So don’t expect to see what I see when I take LSD and ecstasy. I report my observations as I experience them, but observation of the psychedelic experience is every bit as subjective and variable as any phenomenological observation of our own experience. What stands out for one observer might remain completely obscure to another.

But the features I observed in my psychedelic experience all pointed toward a single self-consistent explanation of the mechanism of experience. It appears that the spatial structure of visual experience is swept out by some kind of volumetric imaging mechanism with a periodic refresh scan, not unlike the principle of television imagery, but extended into three dimensions. This was interesting indeed!


Related Articles:

  • Quantifying Bliss – which proposes a model from first principles to explain the structural properties of an experience that makes it feel good, bad, mixed, or neutral (i.e. valence). It then derives from this model precise, empirically testable predictions for what really good experiences should look like. Specifically, MDMA euphoria is postulated to be the result of a high level of consonance between connectome-specific harmonic waves.
  • A Future for Neuroscience – which discusses the broad implications of a harmonic resonance theory of brain function for neuroscience, including new ways to conceptualize personality, and exotic states of consciousness.
  • The Pseudo-Time Arrow – which discusses a particular physicalist model to explain the experience of time by examining the patterns of *implicit causality* in networks of local binding (these terms are defined there). The bottom line being: each moment of experience contains time implicitly embedded in its geometric structure. Psychedelics, MDMA, and their combination would each have unique signature structural effects along the arrow of pseudo-time.

Taken together, these articles would provide an explanation for why MDMA has a uniquely euphoric effect. In particular, Lehar’s point that MDMA’s generalized jitteryness/dithering smooths out the sharp edges of an LSD experience would show up as the harmonization/regularization of the relationship between time-slices along the pseudo-time arrow of experience. The Symmetry Theory of Valence can then be applied to the resulting network of local binding after MDMA’s smoothing effect, leading to the peculiar insight that MDMA’s euphoric effects come from the symmetrification of experience along the axis of experiential time. The creaminess of experience produced by MDMA that Lehar talks about feels very good precisely because it is the phenomenal character of a dissonance-free state of consciousness. Hence, the fundamental nature of pleasure is not behavioral reinforcement, the maximization of utility according to one’s utility function, or expected surprise minimization; pleasure is more fundamental and low-level than any of those properties. Pleasure, we predict, shall correspond to the degree and intensity of energized symmetries present in a bound moment of experience, and MDMA phenomenology is a clear example of what it looks like to optimize for this property.

The Pseudo-Time Arrow: Explaining Phenomenal Time With Implicit Causal Structures In Networks Of Local Binding

At this point in the trip I became something that I can not put into words… I became atemporal. I existed without time… I existed through an infinite amount of time. This concept is impossible to comprehend without having actually perceived it. Even now in retrospect it is hard to comprehend it. But I do know that I lived an eternity that night… 

 

– G.T. Currie. “Impossible to Understand Reality: An Experience with LSD”

Time distortion is an effect that makes the passage of time feel difficult to keep track of and wildly distorted.

 

PsychonautWiki

Introduction

What is time? When people ask this question it is often hard to tell what they are talking about. Indeed, without making explicit one’s background philosophical assumptions this question will usually suffer from a lot of ambiguity. Is one talking about the experience of time? Or is one talking about the physical nature of time? What sort of answer would satisfy the listener? Oftentimes this implicit ambiguity is a source of tremendous confusion. Time distortion experiences deepen the mystery; the existence of exotic ways of experiencing time challenges the view that we perceive the passage of physical time directly. How to disentangle this conundrum?

Modern physics has made enormous strides in pinning down what physical time is. As we will see, one can reduce time to causality networks, and causality to patterns of conditional statistical independence. Yet in the realm of experience the issue of time remains much more elusive.

In this article we provide a simple explanatory framework that accounts for both the experience of time and its relation to physical time. We then sketch out how this framework can be used to account for exotic experiences of time. We end with some thoughts pertaining to the connection between the experience of time and valence (the pleasure-pain axis), which may explain why exotic experiences of the passage of time are frequently intensely emotional in nature.

To get there, let us first lay out some key definitions and background philosophical assumptions:

Key Terminology: Physical vs. Phenomenal Time

Physical Time: This is the physical property that corresponds to what a clock measures. In philosophy of time we can distinguish between eternalism and presentism. Eternalism postulates that time is a geometric feature of the universe, best exemplified by the “block universe” metaphor (i.e. where time is another dimension alongside our three spatial dimensions). Presentism, instead, postulates that only the present moment is real; the past and the future are abstractions derived from the way we experience patterns in sequences of events. The past is gone, and the future has yet to come.

Now, it used to be thought that there was a universal metronome that dictated “what time it is” in the universe. With this view one could reasonably support presentism as a viable account of time. However, ever since Einstein’s theory of relativity was empirically demonstrated, we have known that there is no absolute frame of reference. Based on the fundamental unity of space and time as presented by relativity, and the absence of an absolute frame of reference, we find interesting novel arguments in favor of eternalism and against presentism (e.g. the Rietdijk–Putnam argument). On the other hand, presentists have rightly argued that the ephemeral nature of the present is self-revealing to any subject of experience. Indeed, how can we explain the feeling of the passage of time if reality is in fact a large geometric “static” structure? While this article does not need to take sides between eternalism and presentism, we will point out that the way we explain the experience of time will in turn diminish the power of presentist arguments based on the temporal character of our experience.

Phenomenal Time: This is what the passing of time feels like. Even drug-naïve individuals can relate to the fact that the passage of time feels different depending on one’s state of mind. The felt sense of time depends on one’s level of arousal (deeply asleep, dreaming, tired, relaxed, alert, wide awake, etc.) and hedonic tone (depressed, anxious, joyful, relaxed, etc.). Indeed, time hangs heavy when one is in pain, and seems to run through one’s fingers when one is having a great time. More generally, when taking into account altered states of consciousness (e.g. meditation, yoga, psychedelics) we see that there is a wider range of experiential phenomena than is usually assumed. Indeed, one can see that there are strange generalizations of phenomenal time. Examples of exotic phenomenal temporalities include: tachypsychia (a.k.a. time dilation), time reversal, short-term memory tracers, looping, “moments of eternity”, temporal branching, temporal synchronicities, timelessness, and so on. We suggest that any full account of consciousness ought to be able to explain all of these variants of phenomenal time (among other key features of consciousness).

Key Background Assumptions

We shall work under three key assumptions. First, we have indirect realism about perception. Second, we have mereological nihilism in the context of consciousness, meaning that one’s stream of consciousness is composed of discrete “moments of experience”. And third, Qualia Formalism, a view that states that each moment of experience has a mathematical structure whose features are isomorphic to the features of the experience. Let us unpack these assumptions:

1. Indirect Realism About Perception

This view also goes by the name of representationalism or simulationism (not to be confused with the simulation hypothesis). In this account, perception as a concept is shown to be muddled and confused. We do not really perceive the world per se. Rather, our brains instantiate a world-simulation that tracks fitness-relevant features of our environment. Our sensory apparatus merely selects which specific world-simulation our brain instantiates. In turn, our world-simulation causally covaries with the input our senses receive and the motor responses it elicits. Furthermore, evolutionary selection pressures, in some cases, work against accurate representations of one’s environment (so long as these are not fitness-enhancing). Hence, we could say that our perception of the world is an adaptive illusion more than an accurate depiction of our surroundings.

A great expositor of this view is Steve Lehar. We recommend his book about how psychonautical experience makes clear the fact that we inhabit (and in some sense are) a world-simulation created by our brain. Below you can find some pictures from his “Cartoon Epistemology”, which narrates a dialogue between a direct realist and an indirect realist about perception:


Steve Lehar also points out that the very geometry of our world-simulation is that of a diorama. We evolved to believe that we can experience the world directly, and the geometry of our world-simulation is very well crafted to keep us under the influence of a sort of spell that makes us believe we are the little person watching the diorama. This world-simulation has a geometry that is capable of representing both nearby regions and far-away objects (and even points-at-infinity), and it represents the subject of experience with a self-model at its projective center.

We think that an account of how we experience time is possible under the assumption that experiential time is a structural feature of this world-simulation. In turn, we would argue that implicit direct realism about perception irrevocably confuses physical time and phenomenal time. For if one assumes that one somehow directly perceives the physical world, doesn’t that mean that one also perceives time? But in this case, what to make of exotic time experiences? With indirect realism we realize that we inhabit an inner world-simulation that causally co-varies with features of the environment and hence resolve to find the experience of time within the confines of one’s own skull.

2. Discrete Moments of Experience

A second key assumption is that experiences are ontologically unitary rather than merely functionally unitary. The philosophy of mind involved in this key assumption is unfortunately rather complex and easy to misunderstand, but we can at least say the following. Intuitively, as long as one is awake and alert, it feels like one’s so-called “stream of consciousness” is an uninterrupted and continuous experience. Indeed, at the limit, some philosophers have even argued that one is a different person each day; subjects of experience are, as it were, delimited by periods of unconsciousness. We instead postulate that the continuity of experience from one moment to the next is an illusion caused by the way experience is constructed. In reality, our brains generate countless “moments of experience” every second, each with its own internal representation of the passage of time and the illusion of a continuous diachronic self.

Contrast this discretized view of experience with deflationary accounts of consciousness (which insist that there is no objective boundary that delimits a given moment of experience) and functionalist accounts of consciousness (which would postulate that experience is smeared across time over the span of hundreds of milliseconds).

The precise physical underpinnings of a moment of experience have yet to be discovered, but if monistic physicalism is to survive, it is likely that the (physical) temporal extension that a single moment of experience spans is incredibly thin (possibly no more than 10^-13 seconds). In this article we make no assumptions about the actual physical temporal extension of a moment of experience. All we need to say is that it is “short” (most likely under a millisecond).

It is worth noting that the existence of discrete moments of experience supports an Empty Individualist account of personal identity. That is, a person’s brain works as an experience machine that generates many conscious events every second, each with its own distinct coordinates in physical space-time and unique identity. We would also argue that this ontology may be compatible with Open Individualism, but the argument for this shall be left to a future article.

3. Qualia Formalism

This third key assumption states that the quality of all experiences can be modeled mathematically. More precisely, for any given moment of experience, there exists a mathematical object whose mathematical features are isomorphic to the features of the experience. At the Qualia Research Institute we take this view and run with it to see where it takes us. Which mathematical object can fully account for the myriad structural relationships between experiences is currently unknown. Yet, we think that we do not need to find the One True Mathematical Object in order to make progress in formalizing the structure of subjective experience. In this article we will simply invoke the mathematical object of directed graphs in order to encode the structure of local binding of a given experience. But first, what is “local binding”? I will borrow David Pearce’s explanation of the terms involved:

The “binding problem”, also called the “combination problem”, refers to the mystery of how the micro-experiences mediated by supposedly discrete and distributed neuronal edge-detectors, motion-detectors, shape-detectors, colour-detectors, etc., can be “bound” into unitary experiential objects (“local” binding) apprehended by a unitary experiential self (“global” binding). Neuroelectrode studies using awake, verbally competent human subjects confirm that neuronal micro-experiences exist. Classical neuroscience cannot explain how they could ever be phenomenally bound. As normally posed, the binding problem assumes rather than derives the emergence of classicality.

 

Non-Materialist Physicalism by David Pearce

In other words, “local binding” refers to the way in which the features of our experience seem to be connected and interwoven into complex phenomenal objects. We do not see a chair as merely a disparate set of colors, edges, textures, etc. Rather, we see it as an integrated whole with fine compositional structure. Its colors are “bound” to its edges which are “bound” to its immediate surrounding space and so forth.

A simple toy model for the structure of an experience can be made by saying that there are “simple qualia” such as color and edges, and “complex qualia” formed by the binding of simple qualia. In turn, we can represent an experience as a graph where each node is a simple quale and each edge is a local binding connection. The resulting globally connected graph corresponds to the “globally bound” experience. Each “moment of experience” is, thus, coarsely at any rate, a network.

While this toy model is almost certainly incomplete (indeed some features of experience may require much more sophisticated mathematical objects to be represented properly), it is fair to say that the rough outline of our experience can be represented with a network-like skeleton encoding the local binding connections. More so, as we will see, this model will suffice to account for many of the surprising features of phenomenal time (and its exotic variants).
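To make the toy model concrete, here is a minimal sketch (our own illustration, not part of the formalism above) of a moment of experience as a binding graph. The qualia labels are purely hypothetical placeholders, and an undirected graph is used for simplicity even though richer (e.g. directed) structures may ultimately be needed:

```python
# Minimal sketch of the toy model: simple qualia as nodes, local binding as edges.
# The node labels are hypothetical placeholders, not claims about actual phenomenology.
import networkx as nx

experience = nx.Graph()

# Simple qualia (nodes)
experience.add_nodes_from(["red_patch", "curved_contour", "vertical_edge", "wood_texture"])

# Local binding relations (edges): e.g. the chair's color is bound to its contour,
# which is bound to its edges and its texture
experience.add_edge("red_patch", "curved_contour")
experience.add_edge("curved_contour", "vertical_edge")
experience.add_edge("vertical_edge", "wood_texture")

# "Global binding" in this toy model corresponds to the graph forming one connected component
print("simple qualia:", experience.number_of_nodes(),
      "| binding relations:", experience.number_of_edges(),
      "| globally bound:", nx.is_connected(experience))
```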

Timeless Causality

While both physical and phenomenal time pose profound philosophical conundrums, it is important to note that science has made a lot of progress in providing formal accounts of physical time. Confusingly, even Einstein’s theory of general relativity is time-symmetric, meaning that the universe would behave the same whether time ran forwards or backwards. Hence relativity does not provide, on its own, a direction to time. What does provide a direction to time are properties like the entropy gradient (i.e. the direction along which disorder is globally increasing) and, the focus of this article, causality as encoded in the network of statistical conditional independence. This is a mouthful, so let us tackle it in more detail.

In Timeless Causality, Yudkowsky argues that one can tell the direction of causality (and hence of the arrow of time) by examining how conditioning on some events informs us about others. We recommend reading the linked article for details (and, for a formal account, SEP’s entry on the matter).

In the image above we have a schematic representation of two measurables (1 & 2) at several times (L, M, and R). The core idea is that we can determine the flow of causality by examining the patterns of statistical conditional independence, with questions like “if I’ve observed L1 and L2, do I gain information about M1 by learning about M2?” and so on*.
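As a concrete, purely illustrative toy version of this setup, the sketch below simulates two measurables whose causality flows "to the right" and checks the resulting patterns of conditional (in)dependence; the coefficients and noise levels are arbitrary assumptions of ours, not anything from Yudkowsky's article:

```python
# Toy simulation of the L -> M -> R setup: how conditional independence reveals
# the direction of causality. All coefficients and noise scales are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Causality flows "to the right": L generates M, M generates R.
L1, L2 = rng.normal(size=n), rng.normal(size=n)
M1 = 0.7 * L1 + 0.3 * L2 + rng.normal(scale=0.5, size=n)
M2 = 0.4 * L1 - 0.6 * L2 + rng.normal(scale=0.5, size=n)
R1 = 0.5 * M1 + 0.5 * M2 + rng.normal(scale=0.5, size=n)
R2 = -0.3 * M1 + 0.8 * M2 + rng.normal(scale=0.5, size=n)

def partial_corr(x, y, conditioners):
    """Correlation between x and y after regressing out the conditioning variables."""
    Z = np.column_stack([np.ones(len(x))] + conditioners)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

print(f"corr(M1, M2)          ≈ {np.corrcoef(M1, M2)[0, 1]:+.3f}  (dependent: common causes L1, L2)")
print(f"corr(M1, M2 | L1, L2) ≈ {partial_corr(M1, M2, [L1, L2]):+.3f}  (screened off by their causes)")
print(f"corr(M1, M2 | R1, R2) ≈ {partial_corr(M1, M2, [R1, R2]):+.3f}  (conditioning on effects induces dependence)")
```

The asymmetry, namely that conditioning on the left-hand variables screens M1 off from M2 while conditioning on the right-hand variables does not, is what lets one read off the direction of causality, and hence of time.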

Along the same lines, Wolfram has done research on how time may emerge in automata based on rule-based network modifications:


Intriguingly, these models of time and causality are tenseless and hence eternalist. The whole universe works as a unified system in which time appears as an axis rather than a metaphysical universal metronome. But if eternalism is true, how come we can feel the passage of time? If moments of experience exist, how come we seem to experience movement and action? Shouldn’t we experience just a single static “image”, like seeing a single movie frame without being aware of the previous ones? We are now finally ready to tackle these questions and explain how time may be encoded in the structure of one’s experience.

Pseudo-Time Arrow


Physical Time vs. Phenomenal Time (video source)

In the image above we contrast physical and phenomenal time explicitly. The top layer shows the physical state of a scene in which a ball is moving along a free-falling parabolic trajectory. In turn, a number of these states are aggregated by a process of layering (second row) into a unified “moment of experience”. As seen on the third row, each moment of experience represents the “present scene” as the composition of three slices of sensory input with a time-dependent dimming factor. Namely, the scene experienced is approximated with a weighted sum of three scenes with the most recent one being weighted the highest and the oldest the least.

In other words, at the coarsest level of organization time is encoded by layering the current input scene with faint after-images of very recent input scenes. In healthy people this process is rather subtle yet always present. Indeed, after-images are an omnipresent feature of sensory modalities (beyond sight).

A simple model of how after-images are layered on top of each other to generate a scene with temporal depth involves what we call “time-dependent qualia decay functions”. Such a function determines how quickly sensory (and internal) impressions fade over time: psychedelics, for example, make the decay function significantly fatter (longer-tailed), while stimulants make it slightly shorter (i.e. higher signal-to-noise ratio at the cost of reduced complex image formation).
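Here is a hedged sketch of what such a decay function and the resulting layering might look like; the functional forms and every number below are arbitrary illustrations, not measured values:

```python
# Illustrative "time-dependent qualia decay functions" and the layering they induce.
# All parameters are made-up assumptions for the sake of the sketch.
import numpy as np

def survival_probability(t, condition="sober"):
    """Probability that a sensory impression is still present t seconds after onset."""
    if condition == "sober":
        return np.exp(-t / 0.3)           # thin-tailed: after-images fade within ~a second
    if condition == "stimulant":
        return np.exp(-t / 0.15)          # slightly shorter decay
    if condition == "psychedelic":
        return (1.0 + t / 0.5) ** -1.2    # fatter, long-tailed: multi-second tracers
    raise ValueError(condition)

# Layering: the experienced scene is approximated as a weighted sum of recent input
# scenes, with the decay function supplying the time-dependent dimming factors.
def layered_scene(recent_scenes, dt=0.1, condition="sober"):
    weights = np.array([survival_probability(i * dt, condition) for i in range(len(recent_scenes))])
    weights /= weights.sum()
    return sum(w * s for w, s in zip(weights, recent_scenes))

scenes = [np.full((2, 2), v) for v in (3.0, 2.0, 1.0)]   # most recent input scene first
print("sober blend:\n", layered_scene(scenes, condition="sober"))
print("psychedelic blend:\n", layered_scene(scenes, condition="psychedelic"))
```

Under the long-tailed decay the older scenes retain relatively more weight, which is the toy analogue of more pronounced tracers.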

With this layering process going on, and the Qualia Formalist model of experience as a network of local binding, we can further find a causal structure in experience akin to that in physical time (as explained in Timeless Causality):

Again, each node of the network represents a simple quale and each edge represents a local binding relationship between the nodes it connects. Then, we can describe the time-dependent qualia decay function as the probability that a node or an edge will vanish at each (physical) time step.


The rightmost nodes and edges are the most recent qualia triggered by sensory input. Notice how the nodes and edges vanish probabilistically with each time step, making the old layers sparsely populated.

With a sufficiently large network one would be able to decode the direction of causality (and hence of time) using the same principles of statistical conditional independence used to account for physical time. What we are proposing is that this underlies what time feels like.
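The following toy simulation (assumed parameters throughout, not taken from this article) implements the layered binding network just described: at each physical time step a new layer of qualia is added, while old nodes and edges vanish probabilistically. Lowering the vanish probabilities, as proposed below for psychedelic states, deepens the pseudo-time arrow:

```python
# Toy simulation of the layered binding network. Parameters are illustrative assumptions.
import random

def simulate_pseudo_time_arrow(steps=10, nodes_per_layer=5,
                               p_node_vanish=0.5, p_edge_vanish=0.4, seed=0):
    rng = random.Random(seed)
    nodes, edges = set(), set()
    for t in range(steps):
        # Old qualia fade probabilistically; edges die with their endpoints or on their own
        nodes = {n for n in nodes if rng.random() > p_node_vanish}
        edges = {e for e in edges if e[0] in nodes and e[1] in nodes
                 and rng.random() > p_edge_vanish}
        # New layer of qualia triggered by sensory input, locally bound to existing qualia
        new_layer = {(t, i) for i in range(nodes_per_layer)}
        all_nodes = nodes | new_layer
        for n in new_layer:
            candidates = sorted(all_nodes - {n})
            for m in rng.sample(candidates, k=min(2, len(candidates))):
                edges.add(tuple(sorted((n, m))))
        nodes = all_nodes
    # "Temporal depth" = number of distinct physical-time layers still represented
    depth = len({t for (t, _) in nodes})
    return len(nodes), len(edges), depth

print("sober-ish       (nodes, edges, depth):", simulate_pseudo_time_arrow(p_node_vanish=0.5, p_edge_vanish=0.4))
print("psychedelic-ish (nodes, edges, depth):", simulate_pseudo_time_arrow(p_node_vanish=0.1, p_edge_vanish=0.05))
```

With a large enough network of this kind one could, in principle, run the same conditional-independence analysis from the previous section to recover the direction of its pseudo-time arrow.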

Now that we understand what the pseudo-time arrow is, what can we do with it?

Explanatory Power: How the Pseudo-Time Arrow Explains Exotic Phenomenal Time

Let us use this explanatory framework on exotic experiences of time. That is, let us see how the network of local binding and its associated pseudo-time arrows can explain unusual experiences of time perception.

To start we should address the fact that tachypsychia (i.e. time dilation) could either mean (a) that “one experiences time passing at the same rate but that this rate moves at a different speed relative to the way clocks tick compared to typical perception” or, more intriguingly, (b) that “time itself feels slower, stretched, elongated, etc.”.

The former (a) is very easy to explain, while the latter requires more work. Namely, time dilation of the former variety can be explained by an accelerated or slowed-down sensory sampling rate, such that the (physical) temporal interval between each layer is either longer or shorter than usual. In this case the structure of the network does not change; what is different is how it maps to physical time. If one were in a sensory deprivation chamber and this type of time dilation were going on, one would not be able to tell, since the quality of phenomenal time (as encoded in the network of local binding) remains the same as before. Compare how it feels to watch a movie in slow motion relative to watching it at its original speed while perfectly sober. Since one is sober either way, what changes is how quickly the world seems to move, not how one feels inside.

The latter (b) is a lot more interesting. In particular, phenomenal time is often incredibly distorted when taking psychedelics in a way that is noticeable even in sensory deprivation chambers. In other words, it is the internal experience of the passage of time that changes rather than the layering rate relative to the external world. So how can we explain that kind of phenomenal time dilation?

Psychedelics

The most straightforward effect of psychedelics one can point out with regard to the structure of one’s experience is the fact that qualia seem to last much longer than usual. This manifests as “tracers” in all sensory modalities. Using the vocabulary introduced above, we would say that psychedelics change the time-dependent qualia decay function by making it significantly “fatter”. While in sober conditions the positive after-image of a lamp will last between 0.2 and 1 second, on psychedelics it will last anywhere between 2 and 15 seconds. This results in a much more pronounced and perceptible change in the layering process of experience. Using Lehar’s diorama model of phenomenal space, we could represent various degrees of psychedelic intoxication with the following progression:

The first image is what one experiences while sober. The second is what one experiences if one takes, e.g., 10 micrograms of LSD (i.e. microdosing), where there is a very faint additional layer that is at times indistinguishable from sober states. The third, fourth, and fifth images represent what tracers may feel like on ~50, ~150, and ~300 micrograms of LSD, respectively. The last image is perhaps most reminiscent of DMT experiences, which provide a uniquely powerful and intense high-frequency layering at the onset of the trip.

In the graphical model of time we could say that the structure of the network changes via (1) a lower probability for each node to vanish in each (physical) time step, and (2) an even lower probability for each edge to vanish in each (physical) time step. The tracers experienced on psychedelics are more than just a layering process; the density of connections also increases. That is to say, while simple qualia last longer, the connections between them are even longer-lasting. The inter-connectivity of experience is enhanced.


A low dose of a psychedelic will lead to a slow decay of simple qualia (colors, edges, etc.) and an even slower decay of connections (local binding), resulting in an elongated and densified pseudo-time arrow.

This explains why time seems to move much more slowly on psychedelics. Namely, each moment of experience has significantly more temporal depth than a corresponding sober state. To illustrate this point, here is a first-person account of this effect:

A high dose of LSD seems to distort time for me the worst… maybe in part because it simply lasts so long. At the end of an LSD trip when i’m thinking back on everything that happened my memories of the trip feel ancient.

When you’re experiencing the trip it’s possible to feel time slowing down, but more commonly for me I get this feeling when I think back on things i’ve done that day. Like “woah, remember when I was doing this. That feels like it was an eternity ago” when in reality it’s been an hour.

 

Shroomery user Subconscious in the thread “How long can a trip feel like?”

On low doses of psychedelics, phenomenal time may seem to acquire a sort of high definition unusual for sober states. The incredible (and accurate) visual acuity of threshold DMT experiences is a testament to this effect, and it exemplifies what a densified pseudo-time arrow feels like:


Just as small doses of DMT enhance the definition of spatial structures, so is the pseudo-time arrow made more regular and detailed, leading to a strange but compelling feeling of “HD vision”.

But this is not all. Psychedelics, in higher doses, can lead to much more savage and surrealistic changes to the pseudo-time arrow. Let us tackle a few of the more exotic variants with this explanatory framework:

Time Loops

This effect feels like being stuck in a perfectly-repeating sequence of events outside of the universe in some kind of Platonic closed timelike curve. People often accidentally induce this effect by conducting repetitive tasks or listening to repetitive sounds (which ultimately entrain this pattern). For most people this is a very unsettling experience, since it produces a pronounced feeling of helplessness, making you feel powerless to ever escape the loop.

In terms of the causal network, this experience could be accounted for with a loop in the pseudo-time arrow of experience:


High Dose LSD can lead to annealing and perfect “standing temporal waves” often described as “time looping” or “infinite time”

Moments of Eternity

Subjectively, so-called “Moments of Eternity” are extremely bizarre experiences that have the quality of being self-sustaining and unconditioned. They are often described in mystical terms, such as “it feels like one is connected to the eternal light of consciousness with no past and no future direction”. Whereas time loops lack some of the common features of phenomenal time such as a vanishing past, moments of eternity are even more alien, as they also lack a general direction for the pseudo-time arrow.


High Dose LSD may also generate a pseudo-time arrow with a central source and sink that connects all nodes.

Both time loops and moments of eternity arise from the confluence of a slower time-dependent qualia decay function and structural annealing (which is typical of feedback). As covered in previous posts, as depicted in numerous psychedelic replications, and as documented in PsychonautWiki, one of the core effects of psychedelics is to lower the symmetry detection threshold. Visually, this leads to the perception of wallpaper symmetry groups covering textures (e.g. grass, walls, etc.). But this effect is much more general than mere visual repetition; it generalizes to the pseudo-time arrow! The texture repetition via mirroring, gyrations, glides, etc. works indiscriminately across (phenomenal) time and space. As an example of this, consider the psychedelic replication gifs below and how the last one nearly achieves a standing-wave structure. On a sufficient dose, this can anneal into a space-time crystal, which may have “time looping” and/or “moment of eternity” features.


Sober Input

Temporal Branching

As discussed in a previous post, a number of people report temporal branching on high doses of psychedelics. The reported experience can be described as simultaneously perceiving multiple possible outcomes of a given event, and its branching causal implications. If you flip a coin, you see it both coming up heads and tails in different timelines, and both of these timelines become superimposed in your perceptual field. This experience is particularly unsettling if one interprets it through the lens of direct realism about perception. Here one imagines that the timelines are real, and that one is truly caught between branches of the multiverse. Which one is really yours? Which one will you collapse into? Eventually one finds oneself in one or another timeline with the alternatives having been pruned. An indirect realist about perception has an easier time dealing with this experience as she can interpret it as the explicit rendering of one’s predictions about the future in such a way that they interfere with one’s incoming sensory stimuli. But just in case, in the linked post we developed an empirically testable prediction from the wild possibility (i.e. where you literally experience information from adjacent branches of the multiverse) and tested it using quantum random number generators (and, thankfully for our collective sanity, obtained null results).


High Dose LSD Pseudo-Time Arrow Branching, as described in trip reports where people seem to experience “multiple branches of the multiverse at once.”

Timelessness

Finally, in some situations people report the complete loss of a perceived time arrow but not due to time loops, moments of eternity, or branching, but rather, due to scrambling. This is less common on psychedelics than the previous kinds of exotic phenomenal time, but it still happens, and is often very disorienting and unpleasant (an “LSD experience failure mode” so to speak). It is likely that this also happens on anti-psychotics and quite possibly with some anti-depressants, which seem to destroy unpleasant states by scrambling the network of local binding (rather than annealing it, as with most euphoric drugs).


Loss of the Pseudo-Time Arrow (bad trips? highly scrambled states caused by anti-psychotics?)

In summary, this framework can tackle some of the weirdest and most exotic experiences of time. It renders subjective time legible to formal systems. And although it relies on an unrealistically simple formalism for the mathematical structure of consciousness, the traction we are getting is strong enough to make this approach a promising starting point for future developments in philosophy of time perception.

We will now conclude with a few final thoughts…

Hyperbolic Geometry

Intriguingly, with compounds such as DMT, the layering process is so fast that on doses above the threshold level one very quickly loses track of the individual layers. In turn, one’s mind attempts to bind together the incoming layers, which leads to attempts to stitch together multiple layers in a small (phenomenal) space. This confusion between layers, compounded with a high density of edges, is how we explained the unusual geometric features of DMT hallucinations, such as the spatial hyperbolic symmetry groups expressed in its characteristic visual texture repetition (cf. eli5). One’s mind tries to deal with multiple copies of e.g. the wall in front, and the simplest way to do so is to stitch them together in a woven Chrysanthemum pattern with hyperbolic wrinkles.

Implementation Level of Abstraction

It is worth noting that this account of phenomenal time lives at the algorithmic layer of Marr’s levels of abstraction, and hence is an algorithmic reduction (cf. Algorithmic Reduction of Psychedelic States). A full account would also have to deal with how these algorithmic properties are implemented physically. The point being that a phenomenal-binding-plus-causal-network account of phenomenal time works as an explanation space whether the network itself is implemented with connectome-specific harmonic waves, serotonergic control-interruption, or something more exotic.

Time and Valence

Of special interest to us is the fact that both moments of eternity and time loops tend to be experienced with very intense emotions. One could imagine that finding oneself in such an altered state is itself bewildering and therefore stunning. But there are many profoundly altered states of consciousness that lack a corresponding emotional depth. Rather, we think that this falls out of the very nature of valence and the way it is related to the structure of one’s experience.

In particular, the symmetry theory of valence (STV) we are developing at the Qualia Research Institute posits that the pleasure-pain axis is a function of the symmetry (and anti-symmetry) of the mathematical object whose features are isomorphic to an experience’s phenomenology. In the case of the simplified toy model of consciousness based on the network of local binding connections, this symmetry may manifest in the form of regularity within and across layers. Both in time loops and moments of eternity we see a much more pronounced level of symmetry of this sort than in the sober pseudo-time arrow structure. Likewise, symmetry along the pseudo-time arrow may explain the high levels of positive valence associated with music, yoga, orgasm, and concentration meditation. Each of these activities would seem to lead to repeating standing waves along the pseudo-time arrow, and hence, highly valenced states. Future work shall aim to test this correspondence empirically.
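As a very crude illustration of the kind of measure STV points at (our own hypothetical toy proxy, not QRI's actual metric), one can compare how "regular" different binding graphs are: highly symmetric graphs such as cycles (loop-like, standing-wave-like structures) tend to have many repeated adjacency-spectrum eigenvalues, while scrambled graphs typically do not.

```python
# Hypothetical toy proxy for graph "symmetry": the fraction of repeated eigenvalues
# in the adjacency spectrum. Not an endorsed valence metric, just an illustration.
import networkx as nx
import numpy as np

def toy_symmetry_score(G, tol=1e-6):
    """Fraction of gaps in the adjacency spectrum that are (near-)zero, i.e. repeated eigenvalues."""
    eigvals = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    return float(np.mean(np.diff(eigvals) < tol))

loop_like = nx.cycle_graph(12)                   # "time loop" / standing-wave-like binding structure
scrambled = nx.gnm_random_graph(12, 18, seed=1)  # same number of nodes, scrambled binding

print("loop-like  toy symmetry score:", round(toy_symmetry_score(loop_like), 3))
print("scrambled  toy symmetry score:", round(toy_symmetry_score(scrambled), 3))
```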


The Qualia Research Institute Logo (timeless, as you can see)


* As Yudkowsky puts it:


Suppose that we do know L1 and L2, but we do not know R1 and R2. Will learning M1 tell us anything about M2? […]

The answer, on the assumption that causality flows to the right, and on the other assumptions previously given, is no. “On each round, the past values of 1 and 2 probabilistically generate the future value of 1, and then separately probabilistically generate the future value of 2.” So once we have L1 and L2, they generate M1 independently of how they generate M2.

But if we did know R1 or R2, then, on the assumptions, learning M1 would give us information about M2. […]

Similarly, if we didn’t know L1 or L2, then M1 should give us information about M2, because from the effect M1 we can infer the state of its causes L1 and L2, and thence the effect of L1/L2 on M2.



Thanks to: Mike Johnson, David Pearce, Romeo Stevens, Justin Shovelain, Andrés Silva Ruiz, Liam Brereton, and Enrique Bojorquez for their thoughts about phenomenal time and its possible mathematical underpinnings.

Anti-Tolerance Drugs

It would indeed be extraordinary if – alone among the neurotransmitter systems of the brain – the endogenous opioid families were immune from dysfunction. Enkephalins are critical to “basal hedonic tone” i.e. whether we naturally feel happy or sad. Yet the therapeutic implications of a recognition that dysfunctional endogenous opioid systems underlie a spectrum of anxiety-disorders and depression are too radical – at present – for the medical establishment to contemplate. In consequence, the use of opioid-based pharmacotherapies for “psychological” pain is officially taboo. The unique efficacy of opioids in banishing mental distress is neglected. Their unrivalled efficacy in treating “physical” nociceptive pain is grudgingly accepted.

 

Future Opioids, by David Pearce

Albert Camus wrote that the only serious question is whether to kill yourself or not. Tom Robbins wrote that the only serious question is whether time has a beginning and an end. Camus clearly got up on the wrong side of bed, and Robbins must have forgotten to set the alarm. There is only one serious question. And that is: Who knows how to make love stay? [emphasis mine] Answer me that and I will tell you whether or not to kill yourself.

 

– Still Life with Woodpecker by Tom Robbins

As eloquently argued by David Pearce in Future Opioids, the problem with opioids and other euphoriant drugs is not that they make you feel good, but that the positive feelings are short-lived. In their stead, tolerance, withdrawal, and dependence ultimately set in after repeated use. We take the position that these negatives are not a necessary outcome of feeling free from physical or psychological malaise; they arise because the brain has clever negative feedback mechanisms that prevent us from wireheading chemically. Rather, we believe that tackling these negative feedback mechanisms directly might be the key that unlocks never-ending bliss. Note that even if excellent anti-tolerance drugs were to be developed and commercialized for therapeutic use, we would still need to find solutions to the problems posed by wireheading. Specifically, disabling the negative feedback mechanisms that prevent us from feeling well all the time still leaves unsolved the problem of avoiding getting stuck in counterproductive patterns of behavior and becoming at risk of turning into a pure replicator (for proposed solutions to these problems see: Wireheading Done Right). Still, we strongly believe that finding safe and effective anti-tolerance drugs is a step in the right direction in the battle against suffering throughout the living world.

We thus provide the following list of promising anti-tolerance drugs in the hopes of: (1) piquing the interest of budding psychopharmacologists who may be weighing in on promising research leads, (2) showing a proof of concept against the false and fatalistic truism that “what goes up has to go down” (cf. The Hedonistic Imperative), and, last but not least, (3) providing hope to people suffering from physical or psychological distress who would benefit from anti-tolerance drugs, such as those who experience treatment-resistant anxiety, depression, chronic pain, or chemical dependence.

It is worth noting that this list is just a draft, and we will continue to revise it as the science progresses. Please let us know in the comment section if you are aware of compounds not included in this list (of special interest are tier 1 and tier 2 compounds).

Tier System

The list is organized by tiers. Tier 1 includes compounds for which there is evidence that they can reverse tolerance. Tier 2 deals with compounds that seem to either block or attenuate the development of tolerance, meaning that co-administering them with a euphoric agonist reduces the speed at which this euphoriant creates tolerance. Tier 3 includes potentiators, that is, compounds that enhance the effects of other substances without increasing tolerance to the extent that would be expected given the intensity of the subjective effects. Tier 4 lists compounds that, while not exactly tolerance-related, are still worth mentioning by virtue of reducing the intensity of drug withdrawals. And finally, Tier 5 includes euphoriants that have a favorable pharmacological profile relative to their alternatives, although they will still produce tolerance in the long term. Typically, a substance belonging to Tier X will also belong to Tier X + 1 and above (except for Tier 5), but we omit repetitions to avoid redundancy (e.g. proglumide not only reverses tolerance, but also prevents tolerance, is a potentiator, and reduces withdrawals).

Opioid System

Tier 1

  1. Ibogaine (see: Low dose treatment)
  2. Proglumide
  3. Naltrexone (specifically in Ultra Low Doses)
  4. Ibudilast (AV-411)

Tier 2

  1. Agmatine (may also help with chronic pain on its own)
  2. Curcumin (found in Turmeric; only works in high-availability forms)
  3. Thymoquinone (found in Nigella Sativa/black seed oil)

Tier 3

  1. DXM (especially potentiates the analgesia, which may be of use for chronic pain sufferers)
  2. Hydroxyzine (beware of its effects on sufferers of Akathisia/Restless Legs Syndrome; also bad in the long term for one’s cognitive capacity)
  3. L-Tyrosine
  4. Magnesium (possibly tier 2 but only weakly so)

Tier 4

  1. L-Aspartic Acid
  2. Ashwagandha
  3. JDTic
  4. Gabapentin
  5. Clonidine

Tier 5

  1. Tianeptine (its effects on the delta opioid receptor attenuate its tolerance when used in therapeutic doses)
  2. Mitragynine (thanks to its partial agonism rather than full agonism it is less dangerous in high doses relative to alternatives; specifically, mitragynine does not have dangerous respiratory depression properties on its own, so switching heroin addicts to it would arguably save countless lives)

 

GABA System

Tier 1

  1. Flumazenil (note: very dose-dependent)

Tier 2

  1. Tranylcypromine
  2. Ginsenosides
  3. Homotaurine
  4. Fasoracetam

Tier 3

  1. Dihydromyricetin

See also.

Dopamine System

Insufficient datapoints for a tier system. Here are the few promising leads:

  1. D-serine
  2. D-cycloserine
  3. Sulbutiamine
  4. Bromantane
  5. Memantine

See also.


Thanks to Adam Karlovsky for help compiling these lists.

Psychedelic Turk: A Platform for People on Altered States of Consciousness

An interesting variable is how much external noise is optimal for peak processing. Some, like Kafka, insisted that “I need solitude for my writing; not ‘like a hermit’ – that wouldn’t be enough – but like a dead man.” Others, like von Neumann, insisted on noisy settings: von Neumann would usually work with the TV on in the background, and when his wife moved his office to a secluded room on the third floor, he reportedly stormed downstairs and demanded “What are you trying to do, keep me away from what’s going on?” Apparently, some brains can function with (and even require!) high amounts of sensory entropy, whereas others need essentially zero. One might look for different metastable thresholds and/or convergent cybernetic targets in this case.

– Mike Johnson, A future for neuroscience

My drunk or high Tweets are my best work.

– Joe Rogan, Vlog#18

Introduction

Mechanical Turk is a service that makes outsourcing simple tasks to a large number of people extremely easy. The only constraint is that the tasks outsourced ought to be the sort of thing that can be explained and performed within a browser in less than 10 minutes, which in practice is not a strong constraint for most tasks you would outsource anyway. This service is in fact a remarkably effective way to accelerate the testing of digital prototypes at a reasonable price.

I think the core idea has incredible potential for the field of interest we explore in this blog, namely consciousness research and the creation of consciousness technologies. Mechanical Turk is already widely used in psychology, but its usefulness could be improved further. Here is an example: imagine an extension to Mechanical Turk in which one could choose to have the tasks completed (or attempted) by people in non-ordinary states of consciousness.
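As a rough illustration of how small an extension this would be on the requester side, here is a minimal sketch using the existing MTurk API via boto3. The custom qualification, task file, title, reward, and every other parameter below are hypothetical placeholders rather than a description of any existing service:

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # Hypothetical custom qualification: workers self-report their current state of consciousness.
    qual = mturk.create_qualification_type(
        Name="State of consciousness (self-reported)",
        Description="Self-reported current state (e.g. sober, cannabis, psychedelic, empathogen).",
        QualificationTypeStatus="Active",
    )

    # A short browser task restricted to workers who hold that qualification.
    hit = mturk.create_hit(
        Title="Rate the pleasantness of short sound clips",
        Description="A ~5 minute auditory hedonics task.",
        Reward="1.50",
        MaxAssignments=50,
        AssignmentDurationInSeconds=600,
        LifetimeInSeconds=86400,
        Question=open("auditory_hedonics_task.xml").read(),  # HTMLQuestion/ExternalQuestion XML
        QualificationRequirements=[{
            "QualificationTypeId": qual["QualificationType"]["QualificationTypeId"],
            "Comparator": "Exists",
            "ActionsGuarded": "Accept",
        }],
    )
    print(hit["HIT"]["HITId"])

The hard part, of course, is not the API call but the honest reporting and verification of the state of consciousness itself.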

Demographic Breakdown

With Mechanical Turk you can already ask for people who belong to specific demographic categories to do your task. For example, some academics are interested in the livelihoods of people within certain age ranges, NLP researchers might need native speakers of a particular language, and people who want a text proof-read may request users who have completed an undergraduate degree. The demographic categories are helpful but also coarse. In practice they tend to be used as noisy proxies for more subtle attributes. If we could multiply the categories, which ones would give the highest bang for the buck? I suspect there is a lot of interesting information to be gained from adding categories like personality, cognitive organization, and emotional temperament. What else?

States of Consciousness as Points of View

One thing to consider is that the value of a service like Mechanical Turk comes in part from the range of “points of view” that the participants bring. After all, ensemble models that incorporate diverse types of modeling approaches and datasets usually dominate in real-world machine learning competitions (e.g. Kaggle). Analogously, for a number of applications, getting feedback from someone who thinks differently than everyone already consulted is much more valuable than consulting hundreds of people similar to those already queried. Human minds, insofar as they are prediction machines, can be used as diverse models. A wide range of points of view expands the perspectives used to draw inferences, and in many real-world conditions this will be beneficial for the accuracy of an aggregated prediction. So what would a radical approach to multiplying such “points of view” entail? Arguably a very efficient way of doing so would involve people who inhabit extraordinarily different states of consciousness outside the “typical everyday” mode of being.
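A toy simulation, under admittedly simplistic assumptions (informants modeled as noisy estimators of a single quantity, with either one shared bias or independent biases), illustrates why diverse points of view help an aggregated prediction:

    import numpy as np

    rng = np.random.default_rng(0)
    truth, n_informants, n_trials = 10.0, 100, 2000

    def aggregate_error(correlated: bool) -> float:
        errs = []
        for _ in range(n_trials):
            if correlated:
                # "Similar" informants: one shared systematic bias plus individual noise.
                estimates = truth + rng.normal(0, 2.0) + rng.normal(0, 1.0, n_informants)
            else:
                # "Diverse" informants: independent biases plus individual noise.
                estimates = truth + rng.normal(0, 2.0, n_informants) + rng.normal(0, 1.0, n_informants)
            errs.append(abs(estimates.mean() - truth))
        return float(np.mean(errs))

    print("similar informants:", aggregate_error(correlated=True))   # ~1.6 (the shared bias survives averaging)
    print("diverse informants:", aggregate_error(correlated=False))  # ~0.2 (independent biases largely cancel)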

Jokingly, I’d very much like to see the “wisdom of the crowds enhanced with psychedelic points of view” expressed in mainstream media. I can imagine an anchorwoman on CNN saying: “according to recent polls 30% of people agree that X, now let’s break this down by state of consciousness… let’s see what the people on acid have to say… ” I would personally be very curious to hear how “the people on acid” are thinking about certain issues relative to e.g. a breakdown of points of view by political affiliation. Leaving jokes aside, why would this be a good idea? Why would anyone actually build this?

I posit that a “Mechanical Turk for People on Psychedelics” would benefit the requesters, the workers, and outsiders. Let’s start with the top three benefits for requesters: better art and marketing, enhanced problem solving, and accelerating the science of consciousness. For workers, the top reason would be making work more interesting, stimulating, and enjoyable. And from the point of view of outsiders, we could anticipate some positive externalities such as improved foundational science, accelerated commercial technology development, and better prediction markets. Let’s dive in:

Benefits to Requesters

Art and Marketing

A reason why a service like this might succeed commercially comes from the importance of understanding one’s audience in art and marketing. For example, if one is developing a product targeted at people who have a hangover (e.g. “hangover remedies”), one’s best bet would be to see how people who actually are hungover resonate with the message. Asking people who are drunk, high on weed, in empathogenic states, on psychedelics, on specific psychiatric medications, etc. could certainly find its use in marketing research for sports, comedy, music shows, and the like.

Basically, when a product is consumed at the sort of events where people frequently avoid being sober for the occasion, doing market research on those same people while sober might produce misleading results. What percentage of concert-goers are sober the entire night? Or of people watching the World Cup final? Clearly, a Mechanical Turk service covering diverse states of consciousness has the potential to improve marketing epistemology.

On the art side, people who might want to be the next Alex Grey or Android Jones would benefit from prototyping new visual styles on crowds of people who are on psychedelics (i.e. the main consumers of such artistic styles).

As an aside, I would like to point out that in my opinion, artists who create audio or images that are expected to be consumed by people in altered states of consciousness have some degree of responsibility in ensuring that they are not particularly upsetting to people in such states. Indeed, some relatively innocent sounds and images might cause a lot of anxiety or trigger negative states in people on psychedelics due to the way they are processed in such states. With a Mechanical Turk for psychedelics, artists could reduce the risk of upsetting festival/concert goers who partake in psychedelic perception by screening out offending stimuli.

Problem Solving

On a more exciting note, there are a number of indications that states of consciousness as alien as those induced by major psychedelics are at times computationally suited to solve information processing tasks in competitive ways. Here are two concrete examples: First, in the sixties there was some amount of research performed on psychedelics for problem solving. A notable example is the 1966 study conducted by Willis Harman and James Fadiman in which mescaline was used to aid scientists, engineers, and designers in solving concrete technical problems, with very positive outcomes. And second, in How to Secretly Communicate with People on LSD we delved into ways that messages could be encoded in audio-visual stimuli such that only people high on psychedelics could decode them. We called this type of information concealment Psychedelic Cryptography.

These examples are just proofs of concept that there are probably a multitude of tasks for which minds under various degrees of psychedelic alteration outperform those same minds in sober states. In turn, it may end up being profitable to recruit people in such states to complete your tasks when they are genuinely better at them than the sober competition. How would one know which state of consciousness to use for a given task? The system could include an algorithm that samples people from various states of consciousness, identifies the most promising states for your particular problem, and then assigns the bulk of the task to them.
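One plausible way to implement that allocation step is a simple Thompson-sampling bandit over states of consciousness; the state names, the Beta model, and the approval-based feedback below are illustrative assumptions, not a spec of any existing platform:

    import random

    # Treat each state of consciousness as a bandit arm and allocate pilot tasks
    # with Thompson sampling over task-approval outcomes.
    states = ["sober", "microdosing", "cannabis", "psychedelic"]
    successes = {s: 1 for s in states}  # Beta(1, 1) priors
    failures = {s: 1 for s in states}

    def pick_state() -> str:
        """Sample a plausible approval rate per state and pick the best draw."""
        return max(states, key=lambda s: random.betavariate(successes[s], failures[s]))

    def record(state: str, task_approved: bool) -> None:
        if task_approved:
            successes[state] += 1
        else:
            failures[state] += 1

    # Pilot phase: send a small batch of tasks, record requester approvals,
    # then assign the bulk of the work to whichever state keeps winning.
    for _ in range(100):
        s = pick_state()
        record(s, task_approved=random.random() < 0.5)  # placeholder feedback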

All of this said, the application I find the most exciting is…

Accelerating the Science of Consciousness

The psychedelic renaissance is finally getting into the territory of performance enhancement in altered states. For example, there is an ongoing study that evaluates how microdosing impacts how one plays Go, and another one that uses a self-blinding protocol to assess how microdosing influences cognitive abilities and general wellbeing.

A whole lot of information about psychedelic states can be gained by doing browser experiments with people high on them. From sensory-focused studies such as visual psychophysics and auditory hedonics to experiments involving higher-order cognition and creativity, internet-based studies of people in altered states can shed a lot of light on how the mind works. I, for one, would love to estimate the base rate of the various wallpaper symmetry groups in psychedelic visuals (cf. Algorithmic Reduction of Psychedelic States), and to study the way psychedelic states influence the pleasantness of sound. There may be no need to spend hundreds of thousands of dollars on experiments that study those questions when the cost of asking people who are on psychedelics to do tasks can be amortized by having them participate in hundreds of studies during e.g. a single LSD session.

17 wallpaper symmetry groups

This kind of research platform would also shed light on how experiences of mental illness compare with altered states of consciousness, and allow us to place the effects of common psychiatric medications on a common “map of mental states”. Let me explain. While recreational materials tend to produce the largest changes to people’s conscious experience, it should go without saying that a whole lot of psychiatric medications have unusual effects on one’s state of consciousness. For example: most people have a hard time pin-pointing the effect of beta blockers on their experience, but it is undeniable that such compounds influence brain activity, and there are suggestions that they may have long-term mood effects. Many people do report specific changes to their experience related to beta blockers, and experienced psychonauts can often compare their effects to other drugs that they may use as benchmarks. By conducting psychophysical experiments on people who are taking various major psychoactives, one would get an objective benchmark for how the mind is altered along a wide range of dimensions by each of these substances. In turn, this generalized Mechanical Turk would enable us to pin-point where much more subtle drugs fall within this space (cf. State-Space of Drug Effects).

In other words, this platform may be revolutionary when it comes to data collection and benchmarking for psychiatric drugs in general. That said, since these compounds are more often than not used daily for several months rather than briefly or as needed, it would be hard to see how the same individual performs a certain task while on and off the medicine. This could be addressed by implementing a system that allows requesters to ask users for follow-up experiments if/when a user changes his or her drug regimen.
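As a toy sketch of what building the “common map of mental states” mentioned above could look like computationally (the dimensions and all numbers below are made up purely for illustration, and PCA is only one of many possible embeddings):

    import numpy as np
    from sklearn.decomposition import PCA

    # Rows: substances; columns: standardized psychophysical scores collected via the
    # platform (e.g. visual symmetry detection, auditory pleasantness, reaction time).
    substances = ["placebo", "beta blocker", "LSD (100ug)", "MDMA (100mg)"]
    scores = np.array([
        [0.0, 0.0, 0.0],
        [0.1, 0.2, -0.1],
        [2.5, 1.0, -0.8],
        [0.8, 2.2, -0.3],
    ])

    # Project onto a 2D "map of mental states"; subtler drugs could later be placed
    # on the same map by running the same battery of browser tasks.
    coords = PCA(n_components=2).fit_transform(scores)
    for name, (x, y) in zip(substances, coords):
        print(f"{name:>14}: ({x:+.2f}, {y:+.2f})")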

Benefit to Users

As claimed earlier on, we believe that this type of platform would make work more enjoyable, stimulating, and interesting for many users. Indeed, there does seem to be a general trend of people wanting to contribute to science and culture by sharing their experiences in non-ordinary states of consciousness. For instance, the wonderful artists at r/replications try to make accurate depictions of various unusual states of consciousness for free. There is even an initiative to document the subjective effects of various compounds by grounding trip reports in a subjective effects index. The point is that if people are willing to share their experience and time in psychedelic states of consciousness for free, chances are that they will not complain if they can also earn money with this unusual hobby.


LSD replication (source: r/replications)

We also know from many artists and scientists that normal everyday states of consciousness are not always the best for particular tasks. By giving a wider range of states of consciousness economic advantages, we would be allowing people to perform at their best. You may not be allowed to do your job while high at your workplace, even if you perform it better that way. But with this kind of platform, you would have the freedom to choose the state of consciousness that optimizes your performance and be paid accordingly.

Possible Downsides

It is worth mentioning that there would be challenges and negative aspects too. In general, we can probably all agree that it would suck to have to endure advertisement targeted at your particular state of consciousness. If there is a way to prevent this from happening I would love to hear it. Unfortunately, I assume that marketing will sooner or later catch on to this modus operandi, and that a Mechanical Turk for people in altered states would be used for advertisement before anything else. Making better targeted ads, it turns out, is a commercially viable way of bootstrapping all sorts of novel systems. But better advertisement also puts us at higher risk of being taken over by pure replicators in the long run, so it is worth being cautious with this application.

In the worst-case scenario, we discover that very negative states of consciousness dominate other states in the arena of computational efficiency. In this scenario, the abilities useful for surviving in the mental economy of the future happen to be those that employ suffering in one way or another. In that case, the evolutionary incentive gradients would lead to terrible places. For example, future minds might end up employing massive amounts of suffering to “run our servers”, so to speak. Plus, these minds would have no choice: if they did not employ suffering, they would be outcompeted by minds that do, i.e. this is a race to the bottom. Scenarios like this have been considered before (1, 2, 3), and we should not ignore their warning signs.

Of course this can only happen if there are indeed computational benefits to using consciousness for information processing tasks to begin with. At Qualia Computing we generally assume that the unity of consciousness confers unique computational benefits. Hence, I would expect any outright computational use of states of consciousness to involve a lot of phenomenal binding, and thus, at the evolutionary limit, conscious super-computers would probably be super-sentient. That said, the optimal hedonic tone of the minds with the highest computational efficiency is less certain. This complex matter will be dealt with elsewhere.

Concluding Discussion

Reverse Engineering Systems

A lot of people would probably agree that a video of Elon Musk high on THC may have substantially higher value than many videos of him sober. A lot of this value comes from the information gained about him by having a completely new point of view (or projection) of his mind. Reverse-engineering a system involves perturbing it and observing how its behavior changes in order to reconstruct how it is put together. The same is true for the mind and the computational benefits of consciousness more broadly.

The Cost of a State of Consciousness

Another important consideration is cost assignment for different states of consciousness. I imagine that the going rates for participants in various states would depend heavily on the kind of application and on the profitability of those states. The price would settle at a point that balances the usability of a state of consciousness for various tasks (demand) against its overall supply.

For problem solving in some specialized applications, I could imagine “mathematician on DMT” being a high-end state of consciousness that commands a premium. Foundational consciousness research and phenomenological studies, for example, might find such participants to be extremely valuable, as they might help analyze novel mathematical ideas and use their mathematical expertise to describe the structure of such experiences (cf. Hyperbolic Geometry of DMT Experiences).

Unfortunately, if the demand for high-end rational psychonauts never truly picks up, one might expect that people who could become professional rational psychonauts will instead work for Google or Facebook or some other high-paying company. Moreover, due to Lemon Markets, people who do insist on hiring rational psychonauts will most likely be disappointed. Sasha Shulgin and his successors will probably only participate in such markets if the rewards are high enough to justify using their precious time in novel alien states of consciousness to do your experiment rather than theirs.

In the ideal case this type of platform might function as a spring-board to generate a critical mass of active rational psychonauts who could do each other’s experiments and replicate the results of underground researchers.

Quality Metrics

Accurately matching the task with the state of consciousness would be critical. For example, you might not necessarily want someone who is high on a large dose of acid to take a look at your tax returns*. Perhaps for mundane tasks one would want people in states of optimal arousal (e.g. on modafinil). As mentioned earlier, a system that identifies the most promising states of consciousness for your task would be a key feature of the platform.

If we draw inspiration from the original service, we could build a system analogous to “Mechanical Turk Masters”, where the service charges a higher price for requesting workers who have been vetted as producing high-quality output. To be a Master one needs a high task-approval rating and an absurdly large number of completed tasks. Perhaps leaderboards and public requester prizes for the best work would go a long way toward keeping the quality of psychedelic work at a high level.
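A minimal sketch of such a vetting rule, with arbitrary placeholder thresholds:

    def is_master(approval_rate: float, tasks_completed: int,
                  min_rate: float = 0.97, min_tasks: int = 1000) -> bool:
        """Hypothetical 'Masters'-style gate: a high approval rating plus a large completed-task count."""
        return approval_rate >= min_rate and tasks_completed >= min_tasks

    # Requesters could then pay a premium to restrict a task to vetted workers:
    workers = [("w1", 0.99, 2500), ("w2", 0.91, 4000), ("w3", 0.98, 120)]
    print([w for (w, rate, n) in workers if is_master(rate, n)])  # ['w1']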

In practice, given the population base of people who would use this service, I would predict that to a large extent the most successful tasks in terms of engagement from the user-base will be those that have nerd-sniping qualities.** That is, make tasks that are especially fun to complete on psychedelics (and other altered states) and you would most likely get a lot of high quality work. In turn, this platform would generate the best outcomes when the tasks submitted are both fun and useful (hence benefiting both workers and requesters alike).

Keeping Consciousness Useful

Finally, we think that this kind of platform would have a lot of long-term positive externalities. In particular, making a wider range of states of consciousness economically useful goes in the general direction of keeping consciousness relevant in the future. In the absence of selection pressures that make consciousness economically useful (and hence useful to stay alive and reproduce), we can anticipate a possible drift from consciousness being somewhat in control (for now) to a point where only pure replicators matter.


Bonus content

If you are concerned with social power in a post-apocalyptic landscape, it is important that you figure out a way to induce psychedelic experiences in such a way that they cannot easily be used as weapons. E.g. it would be key to only have physiologically safe (e.g. not MDMA) and low-potency (e.g. not LSD) materials in a Mad Max scenario. For the love of God, please avoid stockpiling compounds that are both potent and physiologically dangerous (e.g. NBOMes) in your nuclear bunker! Perhaps high-potency materials could still work out if they are blended in hard-to-separate ways with fillers, but why risk it? I assume that becoming a cult leader would not be very hard if one were the only person who can procure reliable mystical experiences for people living in most post-apocalyptic scenarios. For best results make sure that the cause of the post-apocalyptic state of the world is a mystery to its inhabitants, such as in the documentary Gurren Lagann, and the historical monographs written by Philip K. Dick.


*With notable exceptions. For example, some regular cannabis users do seem to concentrate better while on manageable amounts of THC, and if the best tax attorney in your vicinity willing to do your taxes is in this predicament, I’d suggest you don’t worry too much about her highness.

**If I were a philosopher of science I would try to contribute a theory of scientific development based on nerd-sniping. Basically, science develops through the dynamic way in which scientists are, at every point, following the nerd-sniping gradient. Scientists are typically people who have their curiosity lever turned all the way up. It’s not so much that they choose their topics strategically or at random; it is less a decision than a compulsion. Hence, the sociological implementation of science involves a collective gradient ascent towards whatever is nerd-sniping given the current knowledge. In turn, the knowledge generated by this intense focus on some area modifies what is known and changes the nerd-sniping landscape, and science moves on to other topics.

Thoughts on the ‘Is-Ought Problem’ from a Qualia Realist Point of View

tl;dr If we construct a theory of meaning grounded in qualia and felt-sense, it is possible to congruently arrive at “should” statements on the basis of reason and “is” claims. Meaning grounded in qualia allows us to import the pleasure-pain axis and its phenomenal character to the same plane of discussion as factual and structural observations.

Introduction

The Is-Ought problem (also called “Hume’s guillotine”) is a classical philosophical conundrum. On the one hand people feel that our ethical obligations (at least the uncontroversial ones like “do not torture anyone for no reason”) are facts about reality in some important sense, but on the other hand, rigorously deriving such “moral facts” from facts about the universe appears to be a category error. Is there any physical fact that truly compels us to act in one way or another?

A friend recently asked about my thoughts on this question and I took the time to express them to the best of my knowledge.

Takeaways

I provide seven points of discussion that together can be used to make the case that “ought” judgements often, though not always, are on the same ontological footing as “is” claims. Namely, that they are references to the structure and quality of experience, whose ultimate nature is self-intimating (i.e. it reveals itself) and hence inaccessible to those who lack the physiological apparatus to instantiate it. In turn, we could say that within communities of beings who share the same self-intimating qualities of experience, the is/ought divide may not be completely unbridgeable.


Summaries of Question and Response

Summary of the question:

How does a “should” emerge at all? How can reason and/or principles and/or logic compel us to follow some moral code?

Summary of the response:

  1. If “ought” statements are to be part of our worldview, then they must refer to decisions about experiences: what kinds of experiences are better/worse, what experiences should or should not exist, etc.
  2. A shared sense of personal identity (e.g. Open Individualism – which posits that “we are all one consciousness”) allows us to make parallels between the quality of our experience and the experience of others. Hence if one grounds “oughts” on the self-intimating quality of one’s suffering, then we can also extrapolate that such “oughts” must exist in the experience of other sentient beings and that they are no less real “over there” simply because a different brain is generating them (general relativity shows that every “here and now” is equally real).
  3. Reduction cuts both ways: if the “fire in the equations of physics” can feel a certain way (e.g. bliss/pain) then objective causal descriptions of reality (about e.g. brain states) are implicitly referring to precisely that which has an “ought” quality. Thus physics may be inextricably connected with moral “oughts”.
  4. If one loses sight of the fact that one’s experience is the ultimate referent for meaning, it is possible to end up in nihilistic accounts of meaning (e.g. Quine’s Indeterminacy of translation and Dennett’s inclusion of qualia within that framework). But if one grounds meaning in qualia, then suddenly both causality and value are on the same ontological footing (cf. Valence Realism).
  5. To see clearly the nature of value it is best to examine it at its extremes (such as MDMA bliss vs. the pain of kidney stones). Having such experiences illuminates the “ought” aspect of consciousness, in contrast to the typical quasi-anhedonic “normal everyday states of consciousness” that most people (and philosophers!) tend to reason from. It would be interesting to see philosophers discuss e.g. the Is-Ought problem while on MDMA.
  6. Claims by long-term meditators that “pleasure and pain, value and disvalue, good and bad, etc.” are an illusion, based on the experience of “dissolving value” in meditative states, are no more valid than a claim by someone doped up on morphine that pain is an illusion. In brief: such claims are made from a state of consciousness that has lost touch with the actual quality of experience that gives (dis)value to consciousness.
  7. Admittedly the idea that one state of consciousness can even refer to (let alone make value judgements about) other states of consciousness is very problematic. In what sense does “reference” even make sense? Every moment of experience only has access to its own content. We posit that this problem is not ultimately unsolvable, and that human concepts are currently mere prototypes of a much better future set of varieties of consciousness optimized for truth-finding. As a thought experiment to illustrate this possible future, consider a full-spectrum superintelligence capable of instantiating arbitrary modes of experience and impartially comparing them side by side in order to build a total order of consciousness.

Full Question and Response

Question:

I realized I don’t share some fundamental assumptions that seemed common amongst the people here [referring to the Qualia Research Institute and friends].

The most basic way I know how to phrase it, is the notion that there’s some appeal to reason and/or principles and/or logic that compels us to follow some type of moral code.

A (possibly straw-man) instance is the notion I associate with effective altruism, namely, that one should choose a career based on its calculable contribution to human welfare. The assumption is that human welfare is what we “should” care about. Why should we? What’s compelling about trying to reconfigure ourselves from whatever we value at the moment to replacing that thing with human welfare (or anything else)? What makes us think we can even truly succeed in reconfiguring ourselves like this? The obvious pitfall seems to be we create some image of “goodness” that we try to live up to without ever being honest with ourselves and owning our authentic desires. IMO this issue is rampant in mainstream Christianity.

More generally, I don’t understand how a “should” emerges within moral philosophy at all. I understand how starting with a want, say happiness, and noting a general tendency, such as I become happy when I help others, that one could deduce that helping others often is likely to result in a happy life. I might even say “I should help others” to myself, knowing it’s a strategy to get what I want. That’s not the type of “should” I’m talking about. What I’m talking about is “should” at the most basic level of one’s value structure. I don’t understand how any amount of reasoning could tell us what our most basic values and desires “should” be.

I would like to read something rigorous on this issue. I appreciate any references, as well as any elucidating replies.

Response:

This is a very important topic. I think it is great that you raise this question, as it stands at the core of many debates and arguments about ethics and morality. I think that one can indeed make a really strong case for the view that “ought” is simply never logically implied by any accurate and objective description of the world (the famous is/ought Humean guillotine). I understand that an objective assessment of all that is will usually be cast as a network of causal and structural relationships. By starting out with a network of causal and structural relationships and using logical inferences to arrive at further high-level facts, one is ultimately bound to arrive at conclusions that themselves are just structural and causal relationships. So where does the “ought” fit in here? Is it really just a manner of speaking? A linguistic spandrel that emerges from evolutionary history? It could really seem like it, and I admit that I do not have a silver bullet argument against this view.

However, I do think that eventually we will arrive at a post-Galilean understanding of consciousness, and that this understanding will itself allow us to point out exactly where, if at all, ethical imperatives are located and how they emerge. For now all I have is a series of observations that I hope can help you develop an intuition for how we are thinking about it, and why our take is original and novel (and not simply a rehashing of previous arguments or appeals to nature/intuition/guilt).

So without further ado I would like to lay out the following points on the table:

  1. I am of the mind that if any kind of “ought” is present in reality it will involve decision-making about the quality of consciousness of subjects of experience. I do not think that it makes sense to talk about an ethical imperative that has anything to do with non-experiential properties of the universe precisely because there would be no one affected by it. If there is an argument for caring about things that have no impact on any state of consciousness, I have yet to encounter it. So I will assume that the question refers to whether certain states of consciousness ought to or ought not to exist (and how to make trade offs between them).
  2. I also think that personal identity is key for this discussion, but why this is the case will make sense in a moment. The short answer is that conscious value is self-intimating/self-revealing, and in order to pass judgement on something that you yourself (as a narrative being) will not get to experience you need some confidence (or reasonable cause) to believe that the same self-intimating quality of experience is present in other narrative orbits that will not interact with you. For the same reasons as (1) above, it makes no sense to care about philosophical zombies (no matter how much they scream at you), but the same is the case for “conscious value p. zombies” (where maybe they experience color qualia but do not experience hedonic tone i.e. they can’t suffer).
  3. A very important concept that comes up again and again in our research is the notion that “reduction cuts both ways”. We take dual aspect monism seriously, and in this view we would consider the mathematical description of an experience and its qualia two sides of the same coin. Now, many people come here and say “the moment you reduce an experience of bliss to a mathematical equation you have removed any fuzzy morality from it and arrived at a purely objective and factual account which does not support an ‘ought ontology'”. But doing this mental move requires you to take the mathematical account as a superior ontology to that of the self-intimating quality of experience. In our view, these are two sides of the same coin. If mystical experiences are just a bunch of chemicals, then a bunch of chemicals can also be a mystical experience. To reiterate: reduction cuts both ways, and this happens with the value of experience to the same extent as it happens with the qualia of e.g. red or cinnamon.
  4. Mike Johnson tends to bring up Wittgenstein and Quine in connection with the “Is-Ought” problem because they are famous for ‘reducing language and meaning’ to games and networks of relationships. But here you should realize that you can apply the concept developed in (3) above just as well to this matter. In our view, a view of language that has “words and objects” at its foundation is not a complete ontology, nor is one that merely introduces language games to dissolve the mystery of meaning. What’s missing here is “felt sense” – the raw way in which concepts feel and operate on each other whether or not they are verbalized. It is my view that here phenomenal binding becomes critical, because a felt sense that corresponds to a word, concept, referent, etc. in itself encapsulates a large amount of information simultaneously, and contains many invariants across a set of possible mental transformations that define what it is and what it is not. Moreover, felt senses are computationally powerful (rather than merely epiphenomenal). Consider Daniel Tammet’s mathematical feats, achieved by experiencing numbers in complex synesthetic ways that interact with each other in ways that are isomorphic to multiplication, factorization, etc. Moreover, he does this at competitive speeds. Language, in a sense, could be thought of as the surface of felt sense. Daniel Dennett famously argued that you can “Quine Qualia” (meaning that you can explain qualia away with a groundless network of relationships and referents). We, on the opposite extreme, would bite the bullet of meaning and say that meaning itself is grounded in felt-sense and qualia. Thus, colors, aromas, emotions, and thoughts, rather than being ultimately semantically groundless as Dennett would have it, turn out to be the very foundation of meaning.
  5. In light of the above, let’s consider some experiences that embody the strongest degree of the felt sense of “ought to be” and “ought not to be” that we know of. On the negative side, we have things like cluster headaches and kidney stones. On the positive side we have things like Samadhi, MDMA, and 5-MeO-DMT states of consciousness. I am personally more certain that the “ought not to be” aspect of experience is more real than the “ought to be” aspect of it, which is why I have a tendency (though no strong commitment) towards negative utilitarianism. When you touch a hot stove you get this involuntary reaction and associated valence qualia of “reality needs you to recoil from this”, and in such cases one has degrees of freedom into which to back off. But when experiencing cluster headaches and kidney stones, this sensation, that self-intimating felt-sense of ‘this ought not to be’, is omnidirectional. The experience is one in which one feels like every direction is negative, and in turn, at its extremes, one feels spiritually violated (“a major ethical emergency” is how a sufferer of cluster headaches recently described it to me). This brings me to…
  6. The apparent illusory nature of value in light of meditative deconstruction of felt-senses. As you put it elsewhere: “Introspectively – Meditators with deep experience typically report all concepts are delusion. This is realized in a very direct experiential way.” Here I am ambivalent, though my default response is to make sense of the meditation-induced feeling that “value is illusory” as itself an operation on one’s conscious topology that makes the value quality of experience get diminished or plugged out. Meditation masters will say things like “if you observe the pain very carefully, if you slice it into 30 tiny fragments per second, you will realize that the suffering you experience from it is an illusory construction”. And this kind of language itself is, IMO, liable to give off the illusion that the pain was illusory to begin with. But here I disagree. We don’t say that people who take a strong opioid to reduce acute pain are “gaining insight into the fundamental nature of pain” and that’s “why they stop experiencing it”. Rather, we understand that the strong opioid changes the neurological conditions in such a way that the quality of the pain itself is modified, which results in a duller, “asymbolic“, non-propagating, well-confined discomfort. In other words, strong opioids reduce the value-quality of pain by locally changing the nature of pain rather than by bringing about a realization of its ultimate nature. The same with meditation. The strongest difference here, I think, would be that opioids are preventing the spatial propagation of pain “symmetry breaking structures” across one’s experience and thus “confine pain to a small spatial location”, whereas meditation does something different that is better described as confining the pain to a small temporal region. This is hard to explain in full, and it will require us to fully formalize how the subjective arrow of time is constructed and how pain qualia can make copies across it. [By noting the pain very quickly one is, I believe, preventing it from building up and then having “secondary pain” which emerges from the cymatic resonance of the various lingering echoes of pain across one’s entire “pseudo-time arrow of experience”.] Sorry if this sounds like word salad, I am happy to unpack these concepts if needed, while also admitting that we are in early stages of the theoretical and empirical development.
  7. Finally, I will concede that the common sense view of “reference” is very deluded on many levels. The very notion that we can refer to an experience with another experience, that we can encode the properties of a different moment of experience in one’s current moment of experience, that we can talk about the “real world” or its “objective ethical values” or “moral duty” is very far from sensical in the final analysis. Reference is very tricky, and I think that a full understanding of consciousness will do some severe violence to our common sense in this area. That, however, is different from the self-disclosing properties of experience such as red qualia and pain qualia. You can do away with all of common sense reference while retaining a grounded understanding that “the constituents of the world are qualia values and their local binding relationships”. In turn, I do think that we can aim to do a decently good job at re-building from the ground up a good approximation of our common sense understanding of the world using “meaning grounded in qualia”, and once we do that we will be on a solid foundation (as opposed to the, admittedly very messy, quasi-delusional character of thoughts as they exist today). Needless to say, this may also require us to change our state of consciousness. “Someday we will have thoughts like sunsets” – David Pearce.

 

The Qualia Explosion

Extract from “Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?” (talk) by David Pearce

Supersentience: Turing plus Shulgin?

Compared to the natural sciences (cf. the Standard Model in physics) or computing (cf. the Universal Turing Machine), the “science” of consciousness is pre-Galilean, perhaps even pre-Socratic. State-enforced censorship of the range of subjective properties of matter and energy in the guise of a prohibition on psychoactive experimentation is a powerful barrier to knowledge. The legal taboo on the empirical method in consciousness studies prevents experimental investigation of even the crude dimensions of the Hard Problem, let alone locating a solution-space where answers to our ignorance might conceivably be found.

Singularity theorists are undaunted by our ignorance of this fundamental feature of the natural world. Instead, the Singularitarians offer a narrative of runaway machine intelligence in which consciousness plays a supporting role ranging from the minimal and incidental to the completely non-existent. However, highlighting the Singularity movement’s background assumptions about the nature of mind and intelligence, not least the insignificance of the binding problem to AGI, reveals why FUSION and REPLACEMENT scenarios are unlikely – though a measure of “cyborgification” of sentient biological robots augmented with ultrasmart software seems plausible and perhaps inevitable.

If full-spectrum superintelligence does indeed entail navigation and mastery of the manifold state-spaces of consciousness, and ultimately a seamless integration of this knowledge with the structural understanding of the world yielded by the formal sciences, then where does this elusive synthesis leave the prospects of posthuman superintelligence? Will the global proscription of radically altered states last indefinitely?

Social prophecy is always a minefield. However, there is one solution to the indisputable psychological health risks posed to human minds by empirical research into the outlandish state-spaces of consciousness unlocked by ingesting the tryptamines, phenylethylamines, isoquinolines and other pharmacological tools of sentience investigation. This solution is to make “bad trips” physiologically impossible – whether for individual investigators or, in theory, for human society as a whole. Critics of mood-enrichment technologies sometimes contend that a world animated by information-sensitive gradients of bliss would be an intellectually stagnant society: crudely, a Brave New World. On the contrary, biotech-driven mastery of our reward circuitry promises a knowledge explosion in virtue of allowing a social, scientific and legal revolution: safe, full-spectrum biological superintelligence. For genetic recalibration of hedonic set-points – as distinct from creating uniform bliss – potentially leaves cognitive function and critical insight both sharp and intact; and offers a launchpad for consciousness research in mind-spaces alien to the drug-naive imagination. A future biology of invincible well-being would not merely immeasurably improve our subjective quality of life: empirically, pleasure is the engine of value-creation. In addition to enriching all our lives, radical mood-enrichment would permit safe, systematic and responsible scientific exploration of previously inaccessible state-spaces of consciousness. If we were blessed with a biology of invincible well-being, exotic state-spaces would all be saturated with a rich hedonic tone.

Until this hypothetical world-defining transition, pursuit of the rigorous first-person methodology and rational drug-design strategy pioneered by Alexander Shulgin in PiHKAL and TiHKAL remains confined to the scientific counterculture. Investigation is risky, mostly unlawful, and unsystematic. In mainstream society, academia and peer-reviewed scholarly journals alike, ordinary waking consciousness is assumed to define the gold standard in which knowledge-claims are expressed and appraised. Yet to borrow a homely-sounding quote from Einstein, “What does the fish know of the sea in which it swims?” Just as a dreamer can gain only limited insight into the nature of dreaming consciousness from within a dream, likewise the nature of “ordinary waking consciousness” can only be glimpsed from within its confines. In order to scientifically understand the realm of the subjective, we’ll need to gain access to all its manifestations, not just the impoverished subset of states of consciousness that tended to promote the inclusive fitness of human genes on the African savannah.

Why the Proportionality Thesis Implies an Organic Singularity

So if the preconditions for full-spectrum superintelligence, i.e. access to superhuman state-spaces of sentience, remain unlawful, where does this roadblock leave the prospects of runaway self-improvement to superintelligence? Could recursive genetic self-editing of our source code repair the gap? Or will traditional human personal genomes be policed by a dystopian Gene Enforcement Agency in a manner analogous to the coercive policing of traditional human minds by the Drug Enforcement Agency?

Even in an ideal regulatory regime, the process of genetic and/or pharmacological self-enhancement is intuitively too slow for a biological Intelligence Explosion to be a live option, especially when set against the exponential increase in digital computer processing power and inorganic AI touted by Singularitarians. Prophets of imminent human demise in the face of machine intelligence argue that there can’t be a Moore’s law for organic robots. Even the Flynn Effect, the three-points-per-decade increase in IQ scores recorded during the 20th century, is comparatively puny; and in any case, this narrowly-defined intelligence gain may now have halted in well-nourished Western populations.

However, writing off all scenarios of recursive human self-enhancement would be premature. Presumably, the smarter our nonbiological AI, the more readily AI-assisted humans will be able recursively to improve our own minds with user-friendly wetware-editing tools – not just editing our raw genetic source code, but also the multiple layers of transcription and feedback mechanisms woven into biological minds. Presumably, our ever-smarter minds will be able to devise progressively more sophisticated, and also progressively more user-friendly, wetware-editing tools. These wetware-editing tools can accelerate our own recursive self-improvement – and manage potential threats from nonfriendly AGI that might harm rather than help us, assuming that our earlier strictures against the possibility of digital software-based unitary minds were mistaken. MIRI rightly call attention to how small enhancements can yield immense cognitive dividends: the relatively short genetic distance between humans and chimpanzees suggests how relatively small enhancements can exert momentous effects on a mind’s general intelligence, thereby implying that AGIs might likewise become disproportionately powerful through a small number of tweaks and improvements. In the post-genomic era, presumably exactly the same holds true for AI-assisted humans and transhumans editing their own minds. What David Chalmers calls the proportionality thesis, i.e. increases in intelligence lead to proportionate increases in the capacity to design intelligent systems, will be vindicated as recursively self-improving organic robots modify their own source code and bootstrap our way to full-spectrum superintelligence: in essence, an organic Singularity. And in contrast to classical digital zombies, superficially small molecular differences in biological minds can result in profoundly different state-spaces of sentience. Compare the ostensibly trivial difference in gene expression profiles of neurons mediating phenomenal sight and phenomenal sound – and the radically different visual and auditory worlds they yield.

Compared to FUSION or REPLACEMENT scenarios, the AI-human CO-EVOLUTION conjecture is apt to sound tame. The likelihood our posthuman successors will also be our biological descendants suggests at most a radical conservativism. In reality, a post-Singularity future where today’s classical digital zombies were superseded merely by faster, more versatile classical digital zombies would be infinitely duller than a future of full-spectrum supersentience. For all insentient information processors are exactly the same inasmuch as the living dead are not subjects of experience. They’ll never even know what it’s like to be “all dark inside” – or the computational power of phenomenal object-binding that yields illumination. By contrast, posthuman superintelligence will not just be quantitatively greater but also qualitatively alien to archaic Darwinian minds. Cybernetically enhanced and genetically rewritten biological minds can abolish suffering throughout the living world and banish experience below “hedonic zero” in our forward light-cone, an ethical watershed without precedent. Post-Darwinian life can enjoy gradients of lifelong blissful supersentience with the intensity of a supernova compared to a glow-worm. A zombie, on the other hand, is just a zombie – even if it squawks like Einstein. Posthuman organic minds will dwell in state-spaces of experience for which archaic humans and classical digital computers alike have no language, no concepts, and no words to describe our ignorance. Most radically, hyperintelligent organic minds will explore state-spaces of consciousness that do not currently play any information-signalling role in living organisms, and are impenetrable to investigation by digital zombies. In short, biological intelligence is on the brink of a recursively self-amplifying Qualia Explosion – a phenomenon of which digital zombies are invincibly ignorant, and invincibly ignorant of their own ignorance. Humans too of course are mostly ignorant of what we’re lacking: the nature, scope and intensity of such posthuman superqualia are beyond the bounds of archaic human experience. Even so, enrichment of our reward pathways can ensure that full-spectrum biological superintelligence will be sublime.


Image Credit: MohammadReza DomiriGanji

The Appearance of Arbitrary Contingency to Our Diverse Qualia

By David Pearce (Mar 21, 2012; Reddit AMA)

 

The appearance of arbitrary contingency to our diverse qualia – and undiscovered state-spaces of posthuman qualia and hypothetical micro-qualia – may be illusory. Perhaps they take the phenomenal values they do as a matter of logico-mathematical necessity. I’d make this conjecture against the backdrop of some kind of zero ontology. Intuitively, there seems no reason for anything at all to exist. The fact that the multiverse exists (apparently) confounds one’s pre-reflective intuitions in the most dramatic possible way. However, this response is too quick. The cosmic cancellation of the conserved constants (mass-energy, charge, angular momentum) to zero, and the formal equivalence of zero information to all possible descriptions [the multiverse?] means we have to take seriously this kind of explanation-space. The most recent contribution to the zero-ontology genre is physicist Lawrence Krauss’s readable but frustrating “A Universe from Nothing: Why There Is Something Rather Than Nothing“. Anyhow, how does a zero ontology tie in with (micro-)qualia? Well, if the solutions to the master equation of physics do encode the field-theoretic values of micro-qualia, then perhaps their numerically encoded textures “cancel out” to zero too. To use a trippy, suspiciously New-Agey-sounding metaphor, imagine the colours of the rainbow displayed as a glorious spectrum – but on recombination cancelling out to no colour at all. Anyhow, I wouldn’t take any of this too seriously: just speculation on idle speculation. It’s tempting simply to declare the issue of our myriad qualia to be an unfathomable mystery. And perhaps it is. But mysterianism is sterile.

Open Individualism and Antinatalism: If God could be killed, it’d be dead already

Abstract

Personal identity views (closed, empty, open) serve in philosophy the role that conservation laws play in physics. They recast difficult problems in solvable terms, and by expanding our horizon of understanding, they likewise allow us to conceive of new classes of problems. In this context, we posit that philosophy of personal identity is relevant in the realm of ethics by helping us address age-old questions like whether being born is good or bad. We further explore the intersection between philosophy of personal identity and philosophy of time, and discuss the ethical implications of antinatalism in a tenseless open individualist “block-time” universe.

Introduction

When learning physics, we often find wide-reaching concepts that simplify many problems by appealing to an underlying principle. A good example of this is the law of conservation of energy. Take for example the following high-school physics problem:

An object that weighs X kilograms falls from a height of Y meters on a planet without an atmosphere and a gravity of Zg. Calculate the velocity with which this object will hit the ground.

One could approach this problem by using Newton’s laws of motion: solve for the distance traveled by the object as a function of time, differentiate it to obtain the velocity, and evaluate that velocity at the moment the object has fallen Y meters.

Alternatively, you could simply note that, since energy is conserved, all of the potential energy of the object at a height of Y meters will have been transformed into kinetic energy at a height of 0. Setting the kinetic energy equal to the initial potential energy and solving for the velocity makes the problem much easier.
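Explicitly, writing the object’s mass as X (which cancels from both sides) and the gravitational acceleration as Zg:

    X \, (Zg) \, Y \;=\; \tfrac{1}{2}\, X\, v^{2} \quad\Longrightarrow\quad v \;=\; \sqrt{2\, Zg\, Y}

The velocity drops out in one line, with no need to solve for the time of flight.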

Once one has learned “the trick” one starts to see many other problems differently. In turn, grasping these deep invariants opens up new horizons; while many problems that seemed impossible can be solved using these principles, it also allows you to ask new questions, which opens up new problems that cannot be solved with those principles alone.

Does this ever happen in philosophy? Perhaps entire classes of difficult problems in philosophy become trivial (or at least tractable) once one grasps powerful principles. Such is the case, I would claim, with transcending common-sense views of personal identity.

Personal Identity: Closed, Empty, Open

In Ontological Qualia I discussed three core views about personal identity. For those who have not encountered these concepts, I recommend reading that article for an expanded discussion.

In brief:

  1. Closed Individualism: You start existing when you are born, and stop when you die.
  2. Empty Individualism: You exist as a “time-slice” or “moment of experience.”
  3. Open Individualism: There is only one subject of experience, who is everyone.


Most people are Closed Individualists; this is the default common sense view for good evolutionary reasons. But what grounds are there to believe in this view? Intuitively, the fact that you will wake up in “your body” tomorrow is obvious and needs no justification. However, explaining why this is the case in a clear way requires formalizing a wide range of concepts such as causality, continuity, memory, and physical laws. And when one tries to do so one will generally find a number of barriers that will prevent one from making a solid case for Closed Individualism.

As an example line of argument, one could argue that what defines you as an individual is your set of memories, and since the person who will wake up in your body tomorrow is the only human being with access to your current memories then you must be it. And while this may seem to work on the surface, a close inspection reveals otherwise. In particular, all of the following facts work against it: (1) memory is a constructive process and every time you remember something you remember it (slightly) differently, (2) memories are unreliable and do not always work at will (e.g. false memories), (3) it is unclear what happens if you copy all of your memories into someone else (do you become that person?), (4) how many memories can you swap with someone until you become a different person?, and so on. Here the more detailed questions one asks, the more ad-hoc modifications of the theory are needed. In the end, one is left with what appears to be just a set of conventional rules to determine whether two persons are the same for practical purposes. But it does not seem to carve nature at its joints; you’d be merely over-fitting the problem.

The same happens with most Closed Individualist accounts. You need to define what the identity carrier is, and after doing so one can identify situations in which identity is not well-defined given that identity carrier (memory, causality, shared matter, etc.).

But for both Open and Empty Individualism, identity is well-defined for any being in the universe. Either all are the same, or all are different. Critics might say that this is a trivial and uninteresting point, perhaps even just definitional. Closed Individualism seems sufficiently arbitrary, however, that questioning it is warranted, and once one does so it is reasonable to start the search for alternatives by taking a look at the trivial cases in which either all or none of the beings are the same.

Moreover, there are many arguments in favor of these views. They indeed solve and usefully reformulate a range of philosophical problems when applied diligently. I would argue that they play a role in philosophy that is similar to that of conservation of energy in physics. The energy conservation law has been empirically tested to extremely high levels of precision, which is something we will have to do without in the realm of philosophy; instead, we shall rely on powerful philosophical insights. In addition, these views make a lot of problems tractable and offer a powerful lens through which to interpret core difficulties in the field.

Open and Empty Individualism either solve or have bearings on: Decision theory, utilitarianism, fission/fusion, mind-uploading and mind-melding, panpsychism, etc. For now, let us focus on…

Antinatalism

Antinatalism is a philosophical view that posits that, all considered, it is better not to be born. Many philosophers could be adequately described as antinatalists, but perhaps the most widely recognized proponent is David Benatar. A key argument Benatar considers is that there might be an asymmetry between pleasure and pain. Granted, he would say, experiencing pleasure is good, and experiencing suffering is bad. But while “the absence of pain is good, even if that good is not enjoyed by anyone”, we also have that “the absence of pleasure is not bad unless there is somebody for whom this absence is a deprivation.” Thus, while being born can give rise to both good and bad, not being born can only be good.

Contrary to popular perception, antinatalists are not more selfish or amoral than others. On the contrary, their willingness to “bite the bullet” of a counter-intuitive but logically defensible argument is a sign of being willing to face social disapproval for a good cause. But along with the stereotype, it is generally true that antinatalists are temperamentally depressive. This, of course, does not invalidate their arguments. If anything, sometimes a degree of depressive realism is essential for arriving at truly sober views in philosophy. But it shouldn’t be a surprise to learn that experiencing suffering, or having experienced it in the past, predisposes people to vehemently argue for the importance of its elimination. Having a direct acquaintance with the self-disclosing nastiness of suffering does give one a broader evidential base for commenting on the matter of pain and pleasure.

Antinatalism and Closed Individualism

Interestingly, Benatar’s argument, and those of many antinatalists, rely implicitly on personal identity background assumptions. In particular, antinatalism is usually framed in a way that assumes Closed Individualism.

The idea that a “person can be harmed by coming into existence” is developed within a conceptual framework in which the inhabitants of the universe are narrative beings. These beings have both spatial and temporal extension. They also have the property that, had the conditions prior to their birth been different, they might not have existed. But how many possible beings are there? How genetically or environmentally different do they need to be to count as different beings? What happens if two beings merge? Or if they converge towards the exact same physical configuration over time?

 

This conceptual framework has counter-intuitive implications when taken to the extreme. For example, the amount of harm you do ends up depending on how many people you allow to be born, rather than on how many years of suffering you prevent.

For the sake of the argument, imagine that you have control over a sentient-AI-enabled virtual environment in which you can make beings start existing and stop existing. Say that you create two beings, A and B, who are different in morally irrelevant ways (e.g. one likes blue more than red, but on average they both end up suffering and delighting in their experience with the same intensity). With Empty Individualism, you would consider giving A 20 years of life and not creating B vs. giving A and B 10 years of life each to be morally equivalent. But with Closed Individualism you would rightly worry that these two scenarios are completely different. By giving years of life to both A and B (any amount of life!) you have doubled the number of subjects who are affected by your decisions. If the gulf of individuality between two persons is infinite, as Closed Individualism would have it, by creating both A and B you have created two parallel realities, and that has an ontological effect on existence. It’s a big deal. Perhaps a way to put it succinctly would be: God considers much more carefully the question of whether to create a person who will live only 70 years versus whether to add a million years of life to an angel who has already lived for a very long time. Creating an entirely new soul is not to be taken lightly (incidentally, this may cast the pro-choice/pro-life debate in an entirely new light).
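To make the contrast concrete, here is a minimal toy sketch of my own (not something from the original argument) of the two styles of moral bookkeeping. The per-year values and the choice to report a raw count of subjects are illustrative assumptions only.

```python
# Toy bookkeeping for the A/B scenario above (illustrative assumptions only).

def empty_individualist_value(experience_moments):
    # Empty Individualism: only moments of experience count; who "owns" them is irrelevant.
    return sum(value for _, value in experience_moments)

def closed_individualist_summary(experience_moments):
    # Closed Individualism: the number of distinct subjects matters in itself,
    # so it is reported alongside the hedonic total rather than collapsed away.
    subjects = {subject for subject, _ in experience_moments}
    return {"subjects_created": len(subjects),
            "hedonic_total": sum(value for _, value in experience_moments)}

# Scenario 1: A lives 20 years (one unit of net value per year); B is never created.
scenario_1 = [("A", 1.0)] * 20
# Scenario 2: A and B each live 10 years with the same average hedonic profile.
scenario_2 = [("A", 1.0)] * 10 + [("B", 1.0)] * 10

print(empty_individualist_value(scenario_1))     # 20.0
print(empty_individualist_value(scenario_2))     # 20.0 -> equivalent under Empty Individualism
print(closed_individualist_summary(scenario_1))  # 1 subject created
print(closed_individualist_summary(scenario_2))  # 2 subjects created -> a further moral fact under Closed Individualism
```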

Thus, antinatalism is usually framed in a way that assumes Closed Individualism. The idea that a being is (possibly) harmed by coming into existence casts the possible solutions in terms of whether one should allow animals (or beings) to be born. But if one were to take an Open or Empty Individualist point of view, the question becomes entirely different. Namely, what kind of experiences should we allow to exist in the future…

Antinatalism and Empty Individualism

I think that the strongest case for antinatalism comes from a take on personal identity that is different from the implicit default (Closed Individualism). If you assume Empty Individualism, in particular, reality starts to seem a lot more horrible than you had imagined. Consider how, in Empty Individualism, the fundamental entities are “moments of experience” rather than narrative streams. Therefore, every time an animal suffers, what is actually happening is that some moments of experience get to have their whole existence consist of pain and suffering. In this light, one stops seeing people who suffer terrible afflictions (e.g. kidney stones, schizophrenia, etc.) as people who are unlucky, and instead one sees their brains as experience machines capable of creating beings whose entire existence is extremely negative.

With Empty Individualism there is simply no way to “make it up to someone” for having had a bad experience in the past. Thus, out of compassion for the extremely negative moments of experience, one could argue that it might be reasonable to try to avoid this whole business of life altogether. That said, this imperative does not come from the asymmetry between pain and pleasure that Benatar talks about (which, as we saw, implicitly requires Closed Individualism). In Empty Individualism it does not make sense to say that someone has been brought into existence. So antinatalism gets justified from a different angle, albeit one that might be even more powerful.

In my assessment, the mere possibility of Empty Individualism is a good reason to take antinatalism very seriously.

It is worth noting that the combination of Empty Individualism and Antinatalism has been (implicitly) discussed by Thomas Metzinger (cf. Benevolent Artificial Anti-Natalism (BAAN)) and FRI’s Brian Tomasik.

Antinatalism and Open Individualism

Here is a Reddit post, and then a comment on a related thread by the same author, both worth reading on this subject (indeed, these posts are what motivated me to write the article you are currently reading):

There’s an interesting theory of personal existence making the rounds lately called Open Individualism. See here, here, and here. Basically, it claims that consciousness is like a single person in a huge interconnected library. One floor of the library contains all of your life’s experiences, and the other floors contain the experiences of others. Consciousness wanders the aisles, and each time he picks up a book he experiences whatever moment of life is recorded in it as if he were living it. Then he moves onto the next one (or any other random one on any floor) and experiences that one. In essence, the “experiencer” of all experience everywhere, across all conscious beings, is just one numerically identical subject. It only seems like we are each separate “experiencers” because it can only experience one perspective at a time, just like I can only experience one moment of my own life at a time. In actuality, we’re all the same person.

 

Anyway, there’s no evidence for this, but it solves a lot of philosophical problems apparently, and in any case there’s no evidence for the opposing view either because it’s all speculative philosophy.

 

But if this were true, and when I’m done living the life of this particular person, I will go on to live every other life from its internal perspective, it has some implications for antinatalism. All suffering is essentially experienced by the same subject, just through the lens of many different brains. There would be no substantial difference between three people suffering and three thousand people suffering, assuming their experiences don’t leave any impact or residue on the singular consciousness that experiences them. Even if all conscious life on earth were to end, there are still likely innumerable conscious beings elsewhere in the universe, and if Open Individualism is correct, I’ll just move on to experiencing those lives. And since I can re-experience them an infinite number of times, it makes no difference how many there are. In fact, even if I just experienced the same life over and over again ten thousand times, it wouldn’t be any different from experiencing ten thousand different lives in succession, as far as suffering is concerned.

 

The only way to end the experience of suffering would be to gradually elevate all conscious beings to a state of near-constant happiness through technology, or exterminate every conscious being like the Flood from the Halo series of games. But the second option couldn’t guarantee that life wouldn’t arise again in some other corner of the multiverse, and when it did, I’d be right there again as the conscious experiencer of whatever suffering it would endure.

 

I find myself drawn to Open Individualism. It’s not mysticism, it’s not a Big Soul or something we all merge with, it’s just a new way of conceptualizing what it feels like to be a person from the inside. Yet, it has these moral implications that I can’t seem to resolve. I welcome any input.

 

– “Open individualism and antinatalism” by Reddit user CrumbledFingers in r/antinatalism (March 23, 2017)

And on a different thread:

I have thought a lot about the implications of open individualism (which I will refer to as “universalism” from here on, as that’s the name coined by its earliest proponent, Arnold Zuboff) for antinatalism. In short, I think it has two major implications, one of which you mention. The first, as you say, is that freedom from conscious life is impossible. This is bad, but not as bad as it would be if I were aware of it from every perspective. As it stands, at least on Earth, only a small number of people have any inkling that they are me. So, it is not like experiencing the multitude of conscious events taking place across reality is any kind of burden that accumulates over time; from the perspective of each isolated nervous system, it will always appear that whatever is being experienced is the only thing I am experiencing. In this way, the fact that I am never truly unconscious does not have the same sting as it would to, for example, an insomniac, who is also never unconscious but must experience the constant wakefulness from one integrated perspective all the time.

 

It’s like being told that I will suffer total irreversible amnesia at some point in my future; while I can still expect to be the person that experiences all the confusion and anxiety of total amnesia when it happens, I must also acknowledge that the residue of any pains I would have experienced beforehand would be erased. Much of what makes consciousness a losing game is the persistence of stresses. Universalism doesn’t imply that any stresses will carry over between the nervous systems of individual beings, so the reality of my situation is by no means as nightmarish as eternal life in a single body (although, if there exists an immortal being somewhere in the universe, I am currently experiencing the nightmare of its life).

 

The second implication of this view for antinatalism is that one of the worst things about coming into existence, namely death, is placed in quite a different context. According to the ordinary view (sometimes called “closed” individualism), death permanently ends the conscious existence of an alienated self. Universalism says there is no alienated self that is annihilated upon the death of any particular mind. There are just moments of conscious experience that occur in various substrates across space and time, and I am the subject of all such experiences. Thus, the encroaching wall of perpetual darkness and silence that is usually an object of dread becomes less of a problem for those who have realized that they are me. Of course, this realization is not built into most people’s psychology and has to be learned, reasoned out, intellectually grasped. This is why procreation is still immoral, because even though I will not cease to exist when any specific organism dies, from the perspective of each one I will almost certainly believe otherwise, and that will always be a source of deep suffering for me. The fewer instances of this existential dread, however misplaced they may be, the better.

 

This is why it’s important to make more people understand the position of universalism/open individualism. In the future, long after the person typing this sentence has perished, my well-being will depend in large part on having the knowledge that I am every person. The earlier in each life I come to that understanding, and thus diminish the fear of dying, the better off I will be. Naturally, this project decreases in potential impact if conscious life is abundant in the universe, and in response to that problem I concede there is probably little hope, unless there are beings elsewhere in the universe that have comprehended who they are and are taking the same steps in their spheres of influence. My dream is that intelligent life eventually either snuffs itself out or discovers how to connect many nervous systems together, which would demonstrate to every connected mind that it has always belonged to one subject, has always been me, but I don’t have any reason to assume this is even possible on a physical level.

 

So, I suppose you are mostly right about one thing: there are no lucky ones that escape the badness of life’s worst agonies, either by virtue of a privileged upbringing or an instantaneous and painless demise. They and the less fortunate ones are all equally me. Yet, the horror of going through their experiences is mitigated somewhat in the details.

 

– A comment by CrumbledFingers in the Reddit post “Antinatalism and Open individualism“, also in r/antinatalism (March 12, 2017)

Our brain tries to make sense of metaphysical questions in wet-ware that shares computational space with a lot of adaptive survival programs. It does not matter if you have thick barriers (cf. thick and thin boundaries of the mind): the way you assess the value of situations as a human will tend to over-focus on whatever would allow you to go up Maslow’s hierarchy of needs (or, more cynically, on achieving great feats that signal your genetic fitness). Our motivational architecture is implemented in such a way that it is very good at handling questions like how to find food when you are hungry and how to play social games in a way that impresses others and leaves a social mark. Our brains utilize many heuristics based on personhood and narrative-streams when exploring the desirability of present options. We are people, and our brains are adapted to solve people problems. Not, as it turns out, general problems involving the entire state-space of possible conscious experiences.

Prandium Interruptus

Our brains render our inner world-simulation with flavors and textures of qualia to suit their evolutionary needs. This, in turn, impairs our ability to aptly represent scenarios that go beyond the range of normal human experiences. Let me illustrate this point with the following thought experiment:

Would you rather (a) have a 1-hour meal, or (b) have the same meal, but at the half-hour point be instantly transformed into a simple, amnesic, and blank experience of perfectly neutral hedonic value that lasts ten quintillion years, and then, after that extremely long stretch of neither-happiness-nor-suffering ends, resume the rest of the meal as if nothing had happened, with no memory of the long neutral period?

According to most utilitarian calculi these two scenarios ought to be perfectly equivalent. In both cases the total amount of positive and negative qualia is the same (the full duration of the meal); the only difference is that the latter also contains an enormous amount of neutral experience. Whether classical or negative, utilitarians should consider these scenarios equivalent since they contain the same amounts of pleasure and pain (note: some other ethical frameworks do distinguish between these cases, such as average and market utilitarianism).
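For readers who like to see the bookkeeping spelled out, here is a minimal sketch of the additive calculus the argument assumes. The durations and valence values are illustrative assumptions of mine, not figures from the thought experiment itself.

```python
# Additive hedonic bookkeeping for the meal thought experiment (illustrative values only).

def total_hedonic_value(segments):
    # Sum of (duration * valence); neutral segments (valence == 0) contribute
    # nothing no matter how long they last.
    return sum(duration * valence for duration, valence in segments)

HOUR = 1.0
TEN_QUINTILLION_YEARS_IN_HOURS = 1e19 * 365.25 * 24  # enormous, but hedonically neutral

scenario_a = [(HOUR, +1.0)]                           # one pleasant 1-hour meal
scenario_b = [(0.5 * HOUR, +1.0),                     # first half of the meal
              (TEN_QUINTILLION_YEARS_IN_HOURS, 0.0),  # neutral, amnesic interlude
              (0.5 * HOUR, +1.0)]                     # second half of the meal

print(total_hedonic_value(scenario_a))  # 1.0
print(total_hedonic_value(scenario_b))  # 1.0 -> identical under a purely additive calculus
```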

Intuitively, however, (a) seems a lot better than (b). One imagines oneself having an awfully long experience, bored out of one’s mind, just wanting it to end, get it over with, and get back to enjoying the nice meal. But the very premise of the thought experiment rules this out: one will not be bored during that period of time, nor will one be wishing it to be over, or anything of the sort, since all of those are negatively valenced mental states and the interlude is stipulated to be hedonically neutral.

Now this is of course a completely crazy thought experiment. Or is it?

The One-Electron View

In 1940 John Wheeler proposed to Richard Feynman the idea that all of reality is made of a single electron moving backwards and forwards in time, interfering with itself. This view has come to be known as the One-Electron Universe. Under Open Individualism, that one electron is you. Between one moment of experience and the next, you may have experienced life as a sextillion different animals, been 10^32 fleeting macroscopic entangled particles, and gotten stuck as a single non-interacting electron in the intergalactic medium for googols of subjective years. Of course you will not remember any of this, because your memories, and indeed all of your motivational architecture and anticipation programs, are embedded in the brain you are instantiating right now. From that point of view, there is absolutely no trace of the experiences you had during this hiatus.

The above way of describing the one-electron view is still just an approximation. In order to see it fully, we also need to address the fact that there is no “natural” order to all of these different experiences. Every way of factorizing the universe and narrating its history as “this happened before that” may be equally inapplicable from the point of view of fundamental reality.

Philosophy of Time


Presentism is the view that only the present moment is real. The future and the past are just conceptual constructs useful for navigating the world, but not actual places that exist. The past “exists as footprints”, in a manner of speaking: “footprints of the past” are just strangely-shaped information-containing regions of the present, including your memories. Likewise, the “future” is unrealized: a helpful abstraction which evolution gave us to survive in this world.

On the other hand, eternalism treats the future and the past as always-actualized, always-real landscapes of reality. Every point in space-time is equally real. Physically, this view tends to be brought up in connection with the theory of relativity, in which there is no frame-invariant notion of an absolute present slicing through the space-time continuum. For a compelling physical case, see the Rietdijk-Putnam argument.
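To make the physical point concrete, here is a small numerical illustration of my own (not taken from the Rietdijk-Putnam literature itself): under the Lorentz transformation, two events that are simultaneous in one frame occur at different times for an observer in relative motion, so there is no observer-independent “now”.

```python
# Relativity of simultaneity: a toy calculation with the Lorentz transformation.
import math

C = 299_792_458.0  # speed of light in m/s

def boosted_time(t, x, v):
    # Time coordinate of the event (t, x) in a frame moving at velocity v along x.
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2)

LIGHT_YEAR = C * 365.25 * 24 * 3600  # one light-year in meters

# Two events that are simultaneous (t = 0) in "our" frame, one light-year apart.
event_here = (0.0, 0.0)
event_there = (0.0, LIGHT_YEAR)

v = 0.1 * C  # an observer moving past us at 10% of light speed
print(boosted_time(*event_here, v))   # 0.0 seconds
print(boosted_time(*event_there, v))  # about -3.2e6 seconds: already "past" for that observer
```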

Eternalism has been explored in literature and spirituality extensively. To name a few artifacts: The Egg, Hindu and Buddhist philosophy, the videos of Bob Sanders (cf. The Gap in Time, The Complexity of Time), the essays of Philip K. Dick and J. L. Borges, the poetry of T. S. Eliot, the fiction of Kurt Vonnegut Jr. (Timequake, Slaughterhouse-Five, etc.), and the graphic novels of Alan Moore, such as Watchmen:

Let me know in the comments if you know of any other work of fiction that explores this theme. In particular, I would love to assemble a comprehensive list of literature that explores Open Individualism and Eternalism.

Personal Identity and Eternalism

For the time being (no pun intended), let us assume that Eternalism is correct. How do Eternalism and personal identity interact? Doctor Manhattan in the images above (taken from Watchmen) exemplifies what it would be like to be a Closed Individualist Eternalist. He seems to be aware of his entire timeline at once, yet recognizes his unique identity apart from others. That said, as explained above, Closed Individualism is a distinctly unphysical theory of identity. One would thus expect Doctor Manhattan, given his physically-grounded understanding of reality, to espouse a different theory of identity.

A philosophy that pairs Empty Individualism with Eternalism is the stuff of nightmares. Not only would we have, as with Empty Individualism alone, that some beings happen to exist entirely as beings of pain. We would also have that such unfortunate moments of experience are stuck in time. Like insects in amber, their expressions of horror and their urgency to run away from pain and suffering are forever crystallized in their corresponding spatiotemporal coordinates. I personally find this view paralyzing and sickening, though I am aware that such a reaction is not adaptive for the abolitionist project. Namely, even if “Eternalism + Empty Individualism” is a true account of reality, one ought not to be so frightened by it that one becomes incapable of working towards preventing future suffering. In this light, I adopt the attitude of “hope for the best, plan for the worst”.

Lastly, if Open Individualism and Eternalism are both true (as I suspect is the case), we would be in for what amounts to an incredibly trippy picture of reality. We are all one timeless spatiotemporal crystal. But why does this eternal crystal (who is everyone) exist? Here the one-electron view and the question “why does anything exist?” could both be addressed simultaneously with a single logico-physical principle: namely, that the sum-total of existence contains no information to speak of. This is what David Pearce calls “Zero Ontology” (see: 1, 2, 3, 4). What you and I are, in the final analysis, is the necessary implication of there being no information; we are all a singular pattern of self-interference whose ultimate nature amounts to a dimensionless unit-sphere in Hilbert space. But this is a story for another post.

On a more grounded note, Scientific American recently ran an article that could be placed in this category of Open Individualism and Eternalism. In it, the authors argue that the physical signatures of multiple-personality disorder, which explain the absence of phenomenal binding between alters that share the same brain, could be extended to explain why reality is one and yet appears as many. We are, in this view, all alters of the universe.

Personal Identity X Philosophy of Time X Antinatalism

Sober, scientifically grounded, and philosophically rigorous accounts of the awfulness of reality are rare. On the one hand, temperamentally happy individuals are more likely to think about the possibilities of heaven that lie ahead of us, and their heightened positive mood will likewise make them more likely to report on their findings. Temperamental depressives, on the other hand, may both investigate reality with less motivated reasoning than the euthymic and also be less likely to report on the results due to their subdued mood (“why even try? why even bother to write about it?”). Suffering in the Multiverse by David Pearce is a notable exception to this pattern. David’s essay highlights that if Eternalism is true together with Empty Individualism, there are vast regions of the multiverse filled with suffering that we can simply do nothing about (“Everett Hell Branches”). Taken together with a negative utilitarian ethic, this represents a calamity of (quite literally) astronomical proportions. And, sadly, there simply is no off-button for the multiverse as a whole. The suffering always was, is, and will be there. This means that the best we can do is to prevent the suffering of those beings in our forward light-cone (a drop relative to the size of the ocean of existence). The only hope left is to find a loophole in quantum mechanics that allows us to cross into other Everett branches of the multiverse and launch cosmic rescue missions. A counsel of despair or a rational prospect? Only time will tell.

Another key author who explores the intersection of these views is Mario Montano (see: Eternalism and Its Ethical Implications and The Savior Imperative).

A key point that both of these authors make is that, however nasty reality might be, ethical antinatalists and negative utilitarians shouldn’t hold their breath for the possibility that reality can be destroyed. In Open Individualism plus Eternalism, the light of consciousness (perhaps what some might call the secular version of God) simply is, everywhere and eternally. If reality could be destroyed at all, such destruction would certainly be limited to our forward light-cone. And unlike in Closed Individualist accounts, it is not possible to help anyone by preventing their birth; the one subject of existence has already been born, and will never be unborn, so to speak.

Nor should ethical antinatalists and negative utilitarians think that avoiding having kids is in any way contributing to the cause of reducing suffering. It is reasonable to assume that the personality traits of agreeableness (specifically care and compassion), openness to experience, and high levels of systematizing intelligence are all over-represented among antinatalists. Insofar as these traits are needed to build a good future, antinatalists should in fact be some of the people who reproduce the most. Mario Montano says:

Hanson calls the era we live in the “dream time” since it’s evolutionarily unusual for any species to be wealthy enough to have any values beyond “survive and reproduce.” However, from an anthropic perspective in infinite dimensional Hilbert space, you won’t have any values beyond “survive and reproduce.” The you which survives will not be the one with exotic values of radical compassion for all existence that caused you to commit peaceful suicide. That memetic stream weeded himself out and your consciousness is cast to a different narrative orbit which wants to survive and reproduce his mind. Eventually. Wanting is, more often than not, a precondition for successfully attaining the object of want.

Physicalism Implies Existence Never Dies

Also, from the same essay:

Anti-natalists full of weeping benignity are literally not successful replicators. The Will to Power is life itself. It is consciousness itself. And it will be, when a superintelligent coercive singleton swallows superclusters of baryonic matter and then spreads them as the flaming word into the unconverted future light cone.

[…]

You eventually love existence. Because if you don’t, something which does swallows you, and it is that which survives.

I would argue that the above reasoning is not entirely correct in the grand scheme of things*, but it is certainly applicable in the context of human-like minds and agents. See also: David Pearce’s similar criticisms of antinatalism as a policy.

This should underscore the fact that, in its current guise, antinatalism is completely self-limiting. Worryingly, one could imagine an organized contingent of antinatalists conducting research on how to destroy life as efficiently as possible. Antinatalists are generally very smart, and if Eliezer Yudkowsky’s claim that “every 18 months the minimum IQ necessary to destroy the world drops by one point” is true, we may be in for some trouble. Pearce’s, Montano’s, and my own take is that even if something akin to negative utilitarianism is true, we should still pursue the goal of diminishing suffering in as peaceful a way as possible. Trying to painlessly destroy the world and failing might turn out to be ethically catastrophic. A much better bet would be, we claim, to work towards the elimination of suffering by developing commercially successful hedonic recalibration technology. This also has the benefit that both depressives and life-lovers will want to team up with you; indeed, the promise of super-human bliss can be extraordinarily motivating to people who already lead happy lives, whereas the prospect of achieving “at best nothing” sounds stale and uninviting (if not outright antagonistic) to them.

An Evolutionary Environment Set Up For Success

If we want to create a world free from suffering, we will have to contend with the fact that suffering is adaptive in certain environments. The solution here is to avoid such environments, and to foster ecosystems of mind that give an evolutionary advantage to the super-happy. What is more, we already have the basic ingredients to do so. In Wireheading Done Right I discussed how, right now, the economy is based on trading three core goods: (1) survival tools, (2) power, and (3) information about the state-space of consciousness. Thankfully, the world right now is populated by humans who largely choose to spend their extra income on fun rather than on trips to the sperm bank. In other words, people are willing to trade some of their expected reproductive success for good experiences. This is good because it allows the existence of an economy of information about the state-space of consciousness, and thus creates an evolutionary advantage for caring about consciousness and being good at navigating its state-space. But for this to be sustainable, we will need to find a way to make positive valence gradients (i.e. gradients of bliss) both economically useful and power-granting. Otherwise, I would argue, the part of the economy that is dedicated to trading information about the state-space of consciousness is bound to be displaced by the other two (i.e. survival and power). For a more detailed discussion of these questions see: Consciousness vs. Pure Replicators.
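To illustrate the displacement worry in the simplest possible terms, here is a toy replicator-style sketch of my own (not a model from Wireheading Done Right or Consciousness vs. Pure Replicators): sectors grow in proportion to the survival/power returns they generate, so a consciousness-exploration sector that generates none eventually vanishes, while one tied to economically useful valence technology holds its ground. All numbers are made-up assumptions.

```python
# Toy displacement dynamics: sector shares grow in proportion to their returns.

def evolve_shares(shares, returns, generations=100):
    for _ in range(generations):
        weighted = {sector: shares[sector] * returns[sector] for sector in shares}
        total = sum(weighted.values())
        shares = {sector: w / total for sector, w in weighted.items()}
    return shares

initial = {"survival": 0.4, "power": 0.4, "consciousness_info": 0.2}

# Case 1: exploring the state-space of consciousness grants no survival/power returns.
print(evolve_shares(initial, {"survival": 1.05, "power": 1.05, "consciousness_info": 1.00}))
# -> the consciousness_info share collapses towards zero.

# Case 2: valence gradients are made economically useful (e.g. hedonic recalibration tech).
print(evolve_shares(initial, {"survival": 1.05, "power": 1.05, "consciousness_info": 1.05}))
# -> the original shares are preserved.
```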


Can we make the benevolent exploration of the state-space of consciousness evolutionarily advantageous?

In conclusion, to close down hell (to the extent that is physically possible), we need to take advantage of the resources and opportunities granted to us by merely living in Hanson’s “dream time” (cf. Age of Spandrels). This includes the fact that right now people are willing to spend money on new experiences (especially if novel and positively valenced), and the fact that philosophy of personal identity can still persuade people to work towards the wellbeing of all sentient beings. In particular, scientifically-grounded arguments in favor of both Open and Empty Individualism weaken people’s sense of self and make them more willing to care about others, regardless of their genetic relatedness. Left to its natural course, however, this tendency may ultimately be removed by natural selection: if those who are immune to philosophy are more likely to maximize their inclusive fitness, humanity may devolve into philosophical deafness. The solution here is to identify the ways in which philosophical clarity can help us overcome coordination problems, highlight natural ethical Schelling points, and ultimately allow us to summon a benevolent super-organism to carry forward the abolition of as much suffering as is physically possible.

And only once we have done everything in our power to close down hell in all of its guises will we be able to enjoy the rest of our forward light-cone in good conscience. Till then, we ethically-minded folks shall relentlessly work on building universe-sized fire-extinguishers to put out the fire of Hell.


* This is for several reasons: (1) phenomenal binding is not epiphenomenal, (2) the computationally optimal valence gradients are, sadly, not necessarily located on the positive side, and (3) wanting, liking, and learning can be disentangled.

John von Neumann

Passing of a Great Mind

John von Neumann, a Brilliant, Jovial Mathematician, was a Prodigious Servant of Science and his Country

by Clay Blair Jr. – Life Magazine (February 25th, 1957)

The world lost one of its greatest scientists when Professor John von Neumann, 54, died this month of cancer in Washington, D.C. His death, like his life’s work, passed almost unnoticed by the public. But scientists throughout the free world regarded it as a tragic loss. They knew that Von Neumann’s brilliant mind had not only advanced his own special field, pure mathematics, but had also helped put the West in an immeasurably stronger position in the nuclear arms race. Before he was 30 he had established himself as one of the world’s foremost mathematicians. In World War II he was the principal discoverer of the implosion method, the secret of the atomic bomb.

The government officials and scientists who attended the requiem mass at the Walter Reed Hospital chapel last week were there not merely in recognition of his vast contributions to science, but also to pay personal tribute to a warm and delightful personality and a selfless servant of his country.

For more than a year Von Neumann had known he was going to die. But until the illness was far advanced he continued to devote himself to serving the government as a member of the Atomic Energy Commission, to which he was appointed in 1954. A telephone by his bed connected directly with his AEC office. On several occasions he was taken downtown in a limousine to attend commission meetings in a wheelchair. At Walter Reed, where he was moved early last spring, an Air Force officer, Lieut. Colonel Vincent Ford, worked full time assisting him. Eight airmen, all cleared for top secret material, were assigned to help on a 24-hour basis. His work for the Air Force and other government departments continued. Cabinet members and military officials continually came for his advice, and on one occasion Secretary of Defense Charles Wilson, Air Force Secretary Donald Quarles and most of the top Air Force brass gathered in Von Neumann’s suite to consult his judgement while there was still time. So relentlessly did Von Neumann pursue his official duties that he risked neglecting the treatise which was to form the capstone of his work on the scientific specialty, computing machines, to which he had devoted many recent years.


His fellow scientists, however, did not need any further evidence of Von Neumann’s rank as a scientist – or his assured place in history. They knew that during World War II at Los Alamos Von Neumann’s development of the idea of implosion speeded up the making of the atomic bomb by at least a full year. His later work with electronic computers quickened U.S. development of the H-bomb by months. The chief designer of the H-bomb, Edward Teller, once said with wry humor that Von Neumann was “one of those rare mathematicians who could descend to the level of the physicist.” Many theoretical physicists admit that they learned more from Von Neumann in methods of scientific thinking than from any of their colleagues. Hans Bethe, who was director of the theoretical physics division at Los Alamos, says, “I have sometimes wondered whether a brain like Von Neumann’s does not indicate a species superior to that of man.”


The foremost authority on computing machines in the U.S., Von Neumann was more than anyone else responsible for the increased use of the electronic “brains” in government and industry. The machine he called MANIAC (mathematical analyzer, numerical integrator and computer), which he built at the Institute for Advanced Study in Princeton, N.J., was the prototype for most of the advanced calculating machines now in use. Another machine, NORC, which he built for the Navy, can deliver a full day’s weather prediction in a few minutes. The principal adviser to the U.S. Air Force on nuclear weapons, Von Neumann was the most influential scientific force behind the U.S. decision to embark on accelerated production of intercontinental ballistic missiles. His “theory of games,” outlined in a book which he published in 1944 in collaboration with Economist Oskar Morgenstern, opened up an entirely new branch of mathematics. Analyzing the mathematical probabilities behind games of chance, Von Neumann went on to formulate a mathematical approach to such widespread fields as economics, sociology and even military strategy. His contributions to the quantum theory, the theory which explains the emission and absorption of energy in atoms and the one on which all atomic and nuclear physics are based, were set forth in a work entitled Mathematical Foundations of Quantum Mechanics which he wrote at the age of 23. It is today one of the cornerstones of this highly specialized branch of mathematical thought.

For Von Neumann the road to success was a many-laned highway with little traffic and no speed limit. He was born in 1903 in Budapest and was of the same generation of Hungarian physicists as Edward Teller, Leo Szilard and Eugene Wigner, all of whom later worked on atomic energy development for the U.S.

The eldest of three sons of a well-to-do Jewish financier who had been decorated by the Emperor Franz Josef, John von Neumann grew up in a society which placed a premium on intellectual achievement. At the age of 6 he was able to divide two eight-digit numbers in his head. By the age of 8 he had mastered college calculus and as a trick could memorize on sight a column in a telephone book and repeat back the names, addresses and numbers. History was only a “hobby,” but by the outbreak of World War I, when he was 10, his photographic mind had absorbed most of the contents of the 46-volume works edited by the German historian Oncken with a sophistication that startled his elders.

Despite his obvious technical ability, as a young man Von Neumann wanted to follow his father’s financial career, but he was soon dissuaded. Under a kind of supertutor, a first-rank mathematician at the University of Budapest named Leopold Fejer, Von Neumann was steered into the academic world. At 21 he received two degrees – one in chemical engineering at Zurich and a PhD in mathematics from the University of Budapest. The following year, 1926, as Admiral Horthy’s rightist regime had been repressing Hungarian Jews, he moved to Göttingen, Germany, then the mathematical center of the world. It was there that he published his major work on quantum mechanics.

The young professor

His fame now spreading, Von Neumann at 23 qualified as a Privatdozent (lecturer) at the University of Berlin, one of the youngest in the school’s history. But the Nazis had already begun their march to power. In 1929 Von Neumann accepted a visiting lectureship at Princeton University and in 1930, at the age of 26, he took a job there as professor of mathematical physics – after a quick trip to Budapest to marry a vivacious 18-year-old named Mariette Kovesi. Three years later, when the Institute for Advanced Study was founded at Princeton, Von Neumann was appointed – as was Albert Einstein – to be one of its first full professors. “He was so young,” a member of the institute recalls, “that most people who saw him in the halls mistook him for a graduate student.”


Although they worked near each other in the same building, Einstein and Von Neumann were not intimate, and because their approach to scientific matters was different they never formally collaborated. A member of the institute who worked side by side with both men in the early days recalls, “Einstein’s mind was slow and contemplative. He would think about something for years. Johnny’s mind was just the opposite. It was lightning quick – stunningly fast. If you gave him a problem he either solved it right away or not at all. If he had to think about it a long time and it bored him, his interest would begin to wander. And Johnny’s mind would not shine unless whatever he was working on had his undivided attention.” But the problems he did care about, such as his “theory of games,” absorbed him for much longer periods.

‘Proof by erasure’

Partly because of this quicksilver quality Von Neumann was not an outstanding teacher to many of his students. But for the advanced students who could ascend to his level he was inspirational. His lectures were brilliant, although at times difficult to follow because of his way of erasing and rewriting dozens of formulae on the blackboard. In explaining mathematical problems Von Neumann would write his equations hurriedly, starting at the top of the blackboard and working down. When he reached the bottom, if the problem was unfinished, he would erase the top equations and start down again. By the time he had done this two or three times most other mathematicians would find themselves unable to keep track. On one such occasion a colleague at Princeton waited until Von Neumann had finished and said, “I see. Proof by erasure.”

Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.’”

Once a friend showed him an extremely complex problem and remarked that a certain famous mathematician had taken a whole week’s journey across Russia on the Trans-Siberian Railroad to complete it. Rushing for a train, Von Neumann took the problem along. Two days later the friend received an air-mail packet from Chicago. In it was a 50-page handwritten solution to the problem. Von Neumann had added a postscript: “Running time to Chicago: 15 hours, 26 minutes.” To Von Neumann this was not an expression of vanity but of sheer delight – a hole in one.

During periods of intense intellectual concentration Von Neumann, like most of his professional colleagues, was lost in preoccupation, and the real world spun past him. He would sometimes interrupt a trip to put through a telephone call to find out why he had taken the trip in the first place.

Von Neumann believed that concentration alone was insufficient for solving some of the most difficult mathematical problems and that these are solved in the subconscious. He would often go to sleep with a problem unsolved, wake up in the morning and scribble the answer on a pad he kept on the bedside table. It was a common occurrence for him to begin scribbling with pencil and paper in the midst of a nightclub floor show or a lively party, “the noisier,” his wife says, “the better.” When his wife arranged a secluded study for Von Neumann on the third floor of the Princeton home, Von Neumann was furious. “He stormed downstairs,” says Mrs. von Neumann, “and demanded, ‘What are you trying to do, keep me away from what’s going on?’ After that he did most of his work in the living room with my phonograph blaring.”

His pride in his brain power made him easy prey to scientific jokesters. A friend once spent a week working out various steps in an obscure mathematical process. Accosting Von Neumann at a party he asked for help in solving the problem. After listening to it, Von Neumann leaned his plump frame against a door and stared blankly, his mind going through the necessary calculations. At each step in the process the friend would quickly put in, “Well, it comes out to this, doesn’t it?” After several such interruptions Von Neumann became perturbed and when his friend “beat” him to the final answer he exploded in fury. “Johnny sulked for weeks,” recalls the friend, “before he found out it was all a joke.”

He did not look like a professor. He dressed so much like a Wall Street banker that a fellow scientist once said, “Johnny, why don’t you smear some chalk dust on your coat so you look like the rest of us?” He loved to eat, especially rich sauces and desserts, and in later years was forced to diet rigidly. To him exercise was “nonsense.”

Those lively Von Neumann parties

Most card-playing bored him, although he was fascinated by the mathematical probabilities involved in poker and baccarat. He never cared for movies. “Every time we went,” his wife recalls, “he would either go to sleep or do math problems in his head.” When he could do neither he would break into violent coughing spells. What he truly loved, aside from work, was a good party. Residents of Princeton’s quiet academic community can still recall the lively goings-on at the Von Neumann’s big, rambling house on Westcott Road. “Those old geniuses got downright approachable at the Von Neumanns’,” a friend recalls. Von Neumann’s talents as a host were based on his drinks, which were strong, his repertoire of off-color limericks, which was massive, and his social ease, which was consummate. Although he could rarely remember a name, Von Neumann would escort each new guest around the room, bowing punctiliously to cover up the fact that he was not using names in introducing people.

Von Neumann also had a passion for automobiles, not for tinkering with them but for driving them as if they were heavy tanks. He turned up with a new one every year at Princeton. “The way he drove, a car couldn’t possibly last more than a year,” a friend says. Von Neumann was regularly arrested for speeding and some of his wrecks became legendary. A Princeton crossroads was for a while known as “Von Neumann corner” because of the number of times the mathematician had cracked up there. He once emerged from a totally demolished car with this explanation: “I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles an hour. Suddenly one of them stepped out in my path. Boom!”

Mariette and John von Neumann had one child, Marina, born in 1935, who graduated from Radcliffe last June, summa cum laude, with the highest scholastic record in her class. In 1937, the year Von Neumann was elected to the National Academy of Sciences and became a naturalized citizen of the U.S., the marriage ended in divorce. The following year on a trip to Budapest he met and married Klara Dan, whom he subsequently trained to be an expert on electronic computing machines. The Von Neumann home in Princeton continued to be a center of gaiety as well as a hotel for prominent intellectual transients.

In the late 1930s Von Neumann began to receive a new type of visitor at Princeton: the military scientist and engineer. After he had handled a number of jobs for the Navy in ballistics and anti-submarine warfare, word of his talents spread, and Army Ordnance began using him more and more as a consultant at its Aberdeen Proving Ground in Maryland. As war drew nearer this kind of work took up more and more of his time.

During World War II he roved between Washington, where he had established a temporary residence, England, Los Alamos and other defense installations. When scientific groups heard Von Neumann was coming, they would set up all of their advanced mathematical problems like ducks in a shooting gallery. Then he would arrive and systematically topple them over.

After the Axis had been destroyed, Von Neumann urged that the U.S. immediately build even more powerful atomic weapons and use them before the Soviets could develop nuclear weapons of their own. It was not an emotional crusade. Von Neumann, like others, had coldly reasoned that the world had grown too small to permit nations to conduct their affairs independently of one another. He held that world government was inevitable – and the sooner the better. But he also believed it could never be established while Soviet Communism dominated half of the globe. A famous Von Neumann observation at the time: “With the Russians it is not a question of whether but when.” A hard-boiled strategist, he was one of the few scientists to advocate preventive war, and in 1950 he was remarking, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock?”

In late 1949, after the Russians had exploded their first atomic bomb and the U.S. scientific community was split over whether or not the U.S. should build a hydrogen bomb, Von Neumann reduced the argument to: “It is not a question of whether we build it or not, but when do we start calculating?” When the H-bomb controversy raged, Von Neumann slipped quietly out to Los Alamos, took a desk and began work on the first mathematical steps toward building the weapon, specifically deciding which computations would be fed to which electronic computers.

Von Neumann’s principal interest in the postwar years was electronic computing machines, and his advice on computers was in demand almost everywhere. One day he was urgently summoned to the offices of the Rand Corporation, a government-sponsored scientific research organization in Santa Monica, Calif. Rand scientists had come up with a problem so complex that the electronic computers then in existence seemingly could not handle it. The scientists wanted Von Neumann to invent a new kind of computer. After listening to the scientists expound, Von Neumann broke in: “Well, gentlemen, suppose you tell me exactly what the problem is?”

For the next two hours the men at Rand lectured, scribbled on blackboards, and brought charts and tables back and forth. Von Neumann sat with his head buried in his hands. When the presentation was completed, he scribbled on a pad, stared so blankly that a Rand scientist later said he looked as if “his mind had slipped his face out of gear,” then said, “Gentlemen, you do not need the computer. I have the answer.”

While the scientists sat in stunned silence, Von Neumann reeled off the various steps which would provide the solution to the problem. Having risen to this routine challenge, Von Neumann followed up with a routine suggestion: “Let’s go to lunch.”

In 1954, when the U.S. development of the intercontinental ballistic missile was dangerously bogged down, study groups under Von Neumann’s direction began paving the way for solution of the most baffling problems: guidance, miniaturization of components, heat resistance. In less than a year Von Neumann put his O.K. on the project – but not until he had completed a relentless investigation in his own dazzlingly fast style. One day, during an ICBM meeting on the West Coast, a physicist employed by an aircraft company approached Von Neumann with a detailed plan for one phase of the project. It consisted of a tome several hundred pages long on which the physicist had worked for eight months. Von Neumann took the book and flipped through the first several pages. Then he turned it over and began reading from back to front. He jotted down a figure on a pad, then a second and a third. He looked out the window for several seconds, returned the book to the physicist and said, “It won’t work.” The physicist returned to his company. After two months of re-evaluation, he came to the same conclusion.

In October 1954 Eisenhower appointed Von Neumann to the Atomic Energy Commission. Von Neumann accepted, although the Air Force and the senators who confirmed him insisted that he retain his chairmanship of the Air Force ballistic missile panel.

Von Neumann had been on the new job only six months when the pain first struck in the left shoulder. After two examinations, the physicians at Bethesda Naval Hospital suspected cancer. Within a month Von Neumann was wheeled into surgery at the New England Deaconess Hospital in Boston. A leading pathologist, Dr. Shields Warren, examined the biopsy tissue and confirmed that the pain was a secondary cancer. Doctors began to race to discover the primary location. Several weeks later they found it in the prostate. Von Neumann, they agreed, did not have long to live.

When he heard the news Von Neumann called for Dr. Warren. He asked, “Now that this thing has come, how shall I spend the remainder of my life?”

“Well, Johnny,” Warren said, “I would stay with the commission as long as you feel up to it. But at the same time I would say that if you have any important scientific papers – anything further scientifically to say – I would get started on it right away.”

Von Neumann returned to Washington and resumed his busy schedule at the Atomic Energy Commission. To those who asked about his arm, which was in a sling, he muttered something about a broken collarbone. He continued to preside over the ballistic missile committee, and to receive an unending stream of visitors from Los Alamos, Livermore, the Rand Corporation, Princeton. Most of these men knew that Von Neumann was dying of cancer, but the subject was never mentioned.

Machines creating new machines

After the last visitor had departed Von Neumann would retire to his second-floor study to work on the paper which he knew would be his last contribution to science. It was an attempt to formulate a concept shedding new light on the workings of the human brain. He believed that if such a concept could be stated with certainty, it would also be applicable to electronic computers and would permit man to make a major step forward in using these “automata.” In principle, he reasoned, there was no reason why some day a machine might not be built which not only could perform most of the functions of the human brain but could actually reproduce itself, i.e., create more supermachines like it. He proposed to present this paper at Yale, where he had been invited to give the 1956 Silliman Lectures.

As the weeks passed, work on the paper slowed. One evening, as Von Neumann and his wife were leaving a dinner party, he complained that he was “uncertain” about walking. Doctors furnished him with a wheelchair. But Von Neumann’s world had begun to close in tight around him. He was seized by periods of overwhelming melancholy.

In April 1956 Von Neumann moved into Walter Reed Hospital for good. Honors were now coming from all directions. He was awarded Yeshiva University’s first Einstein prize. In a special White House ceremony President Eisenhower presented him with the Medal of Freedom. In April the AEC gave him the Enrico Fermi award for his contributions to the theory and design of computing machines, accompanied by a $50,000 tax-free grant.

Although born of Jewish parents, Von Neumann had never practiced Judaism. After his arrival in the U.S. he had been baptized a Roman Catholic. But his divorce from Mariette had put him beyond the sacraments of the Catholic Church for almost 19 years. Now he felt an urge to return. One morning he said to Klara, “I want to see a priest.” He added, “But he will have to be a special kind of priest, one that will be intellectually compatible.” Arrangements were made for special instructions to be given by a Catholic scholar from Washington. After a few weeks Von Neumann began once again to receive the sacraments.

The great mind falters

Toward the end of May the seizures of melancholy began to occur more frequently. In June the doctors finally announced – though not to Von Neumann himself – that the cancer had begun to spread. The great mind began to falter. “At times he would discuss history, mathematics, or automata, and he could recall word for word conversations we had had 20 years ago,” a friend says. “At other times he would scarcely recognize me.” His family – Klara, two brothers, his mother and daughter Marina – drew close around him and arranged a schedule so that one of them would always be on hand. Visitors were more carefully screened. Drugs fortunately prevented Von Neumann from experiencing pain. Now and then his old gifts of memory were again revealed. One day in the fall his brother Mike read Goethe’s Faust to him in German. Each time Mike paused to turn the page, Von Neumann recited from memory the first few lines of the following page.

One of his favorite companions was his mother Margaret von Neumann, 76 years old. In July the family in turn became concerned about her health, and it was suggested that she go to a hospital for a checkup. Two weeks later she died of cancer. “It was unbelievable,” a friend says. “She kept on going right up to the very end and never let anyone know a thing. How she must have suffered to make her son’s last days less worrisome.” Lest the news shock Von Neumann fatally, elaborate precautions were taken to keep it from him. When he guessed the truth, he suffered a severe setback.

Von Neumann’s body, which he had never given much thought to, went on serving him much longer than did his mind. Last summer the doctors had given him only three or four weeks to live. Months later, in October, his passing was again expected momentarily. But not until this month did his body give up. It was characteristic of the impatient, witty and incalculably brilliant John von Neumann that although he went on working for others until he could do no more, his own treatise on the workings of the brain – the work he thought would be his crowning achievement in his own name – was left unfinished.
