Materializing Hyperbolic Spaces with Gradient-Index Optics and One-Way Mirrors

Burning Man is one week away, so I figured I would share a neat idea I’ve been hoarding that could lead to a kick-ass Burning Man-style psychedelic art installation. If I have the time and resources, I may even try to manifest this idea in real life at some point.

Around the time I was writing The Hyperbolic Geometry of DMT Experiences (cf. Eli5) I began asking myself how to help people develop a feel for what it is like to inhabit non-Euclidean phenomenal spaces. I later found out that Henry Segerman developed an immersive VR experience in which you can explore 3D hyperbolic spaces. That is fantastic, and a great step in the right direction. But I wanted to see if there was any way for us to experience 3D hyperbolic geometry in a material way without the aid of computers. Something that you could hold in your hand, like a sort of mystical amulet that works as a reminder of the vastness of the state-space of consciousness.

What I had in mind was along the lines of how we can, in a sense, visualize infinite (Euclidean) space using two parallel mirrors. I thought that maybe there could be a way to do the same but in a way that visualizes a hyperbolic space.

One-Way Mirrors and 3D Space-Filling Shapes

Right now you can use one-way mirrors on the sides of a polyhedron whose edges are lined with LEDs to create a fascinating “infinite space effect”:

This works perfectly for cubes in particular, given that cubes are symmetrical space-filling polyhedra. But as you can see in the video above, the effect is not quite right when we use dodecahedra (or any other non-cube Platonic solid). The corners simply do not align properly. This is because the solid angles of non-cube Platonic solids cannot perfectly cover the 4π steradians around a point (which is what eight cubes do when they meet at a corner):
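The mismatch can be checked numerically. A cube corner subtends exactly one octant of the sphere (π/2 steradians), and the solid angle at a regular dodecahedron’s vertex has the known closed form π − arctan(2/11) ≈ 2.96 sr. A quick sketch:

```python
import math

# Solid angle subtended at a vertex, in steradians.
cube_corner = math.pi / 2                    # one octant: 4*pi / 8
dodeca_corner = math.pi - math.atan(2 / 11)  # known closed form, ~2.9617 sr

full_sphere = 4 * math.pi

# How many corners would be needed to exactly surround a point?
print(full_sphere / cube_corner)    # 8.0 -> eight cubes close up perfectly
print(full_sphere / dodeca_corner)  # ~4.24 -> dodecahedra cannot close up
```

Since 4π divided by the dodecahedron’s vertex angle is not an integer, no number of Euclidean dodecahedra can meet flush at a corner.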


This is not the case in hyperbolic space, though: arbitrary regular polyhedra can tessellate 3D hyperbolic space. For instance, one can use dodecahedra by choosing their size appropriately, in such a way that they all have 90-degree corners (cf. Not Knot):

Gradient-Index Optics

Perhaps, I thought to myself, there is a way to physically realize hyperbolic curvature and let us see what it is like to live in a place where dodecahedra tessellate space. I kept thinking about this problem, and one day, while riding the BART and introspecting on the geometry of sound, I realized that one could use gradient-index optics to create a solid in which light paths behave as if the space were hyperbolic.

Gradient-index optics is the subfield of optics that specializes in the use of materials that have a smooth non-constant refractive index. One way to achieve this is to blend two transparent materials (e.g. two kinds of plastic) in such a way that the concentration of each type varies smoothly from one region to the next. As a consequence, light travels in unusual and bendy ways, like this:
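The bending can be sketched with the standard ray equation of geometric optics, d/ds(n dr/ds) = ∇n: rays curve toward regions of higher refractive index. Below is a minimal 2D numerical integration, using a made-up linear index gradient purely for illustration:

```python
import numpy as np

def trace_ray(n, grad_n, r0, d0, step=1e-3, n_steps=2000):
    """Integrate the ray equation d/ds (n * dr/ds) = grad(n).

    r0: starting position, d0: initial direction, s: arc length.
    Simple Euler scheme -- fine for illustration, not for lens design.
    """
    r = np.array(r0, dtype=float)
    d = np.array(d0, dtype=float)
    d /= np.linalg.norm(d)
    path = [r.copy()]
    for _ in range(n_steps):
        # Update the "optical momentum" p = n * d, then renormalize d.
        p = n(r) * d + step * grad_n(r)
        d = p / np.linalg.norm(p)
        r = r + step * d
        path.append(r.copy())
    return np.array(path)

# Hypothetical medium: index increases linearly with height y.
n = lambda r: 1.5 + 0.8 * r[1]
grad_n = lambda r: np.array([0.0, 0.8])

path = trace_ray(n, grad_n, r0=[0.0, 0.0], d0=[1.0, 0.0])
# The ray, launched horizontally, curves upward toward higher n:
print(path[-1][1] > 0)  # True
```

The index profile and gradient here are arbitrary placeholders; the point is only that a smooth gradient continuously deflects the ray, which is exactly the effect the installation would exploit.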

Materializing Hyperbolic Spaces

By carefully selecting various transparent plastics with different indices of refraction and blending them in a 3D printer in precisely the right proportions, one can in principle build solids in which the gradient-index properties of the end product instantiate a hyperbolic metric. If one were to place the material with the lowest refractive index at the very center of a dodecahedron and add materials of increasingly higher refractive indices all the way up to the corners, then the final effect could be one in which the dodecahedron has an interior in which light moves as if it were in a hyperbolic space. One can then place LED strips along the edges and seal the sides with one-way window film. Lo and behold, one would then quite literally be able to “hold infinity in the palm of your hand”:
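The required index profile can be read off from the fact that light in a medium of index n(x) follows geodesics of the “optical metric” n(x)² δᵢⱼ. Matching this to the Poincaré ball model of hyperbolic 3-space (a standard conformally flat form) gives, up to an overall constant n₀:

```latex
\[
  \text{optical metric: } ds^2 = n(x)^2\,\delta_{ij}\,dx^i dx^j,
  \qquad
  \text{Poincar\'e ball: } ds^2 = \frac{4}{(1-r^2)^2}\,\delta_{ij}\,dx^i dx^j,
\]
\[
  \Rightarrow\quad n(r) = \frac{2\,n_0}{1-r^2}, \qquad r = |x| < 1 .
\]
```

This is lowest at the center and grows toward the boundary sphere, consistent with the low-index-core layout described above. Since n diverges as r → 1, any physical build can only approximate the metric over a bounded inner region.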


I think that this sort of gadget would allow us to develop better intuitions for what the far-out (experiential) spaces people “visit” on psychedelics look like. One can then, in addition, generalize this to make space behave as if its 3D curvature was non-constant. One might even, perhaps, be able to visualize a black-hole by emulating its event-horizon using a region with extremely large refractive index.


Challenges

I would like to conclude by considering some of the challenges we would face in trying to construct this. For instance, finding the right materials may be difficult: they would need to span a wide range of refractive indices, be similarly transparent, blend smoothly with each other, and have low melting points. I am not a materials scientist, but my gut feeling is that this is not impossible with current technology. Modern gradient-index optics already achieves a rather impressive level of precision.

Another challenge comes from the resolution of the 3D printer. Modern 3D printers produce layers with a thickness between 0.025 and 0.2 mm. It is possible that this is simply not fine enough to avoid visible discontinuities in the light paths. At least in principle, this could be surmounted by re-melting the previous layer so that the new layer smoothly diffuses into it and partially blends with it, in accordance with the desired hyperbolic metric.

An important caveat is that the medium in which we live (i.e. air at atmospheric pressure) is not very optically dense to begin with. In the example of the dodecahedra, this may be a problem, considering that the corners need to form 90-degree angles from the point of view of an outside observer. This implies that the surrounding medium would need a higher refractive index than the transparent material at the corners. It could be fixed by immersing the object in water or some other dense medium (and designing it under the assumption of being surrounded by such a medium). Alternatively, one could simply use appropriately curved sides in lieu of flat planes. This may not be as aesthetically appealing, though, so it may pay off to brainstorm other clever approaches that I haven’t thought of.

Above all, perhaps the most difficult challenge would be that of dealing with the inevitable presence of chromatic aberrations:

Since the degree to which a light path bends in a medium depends on its frequency, different colors will bend by different amounts in a gradient-index material. If the LEDs placed at the edges of the polyhedron are white, we can expect very visible distortions and crazy rainbow patterns to emerge. Taken purely for its aesthetic value, this might be for the better. But since the desired effect is to faithfully materialize the behavior of light in hyperbolic space, it is undesirable. The easiest way to deal with this problem would be to display the gadget in a darkened room and use only monochromatic LEDs on the edges of the polyhedron, with their frequency tuned to the refractive gradient for which the metric is hyperbolic. More fancifully, it might be possible to overcome chromatic aberrations with the use of metamaterials (cf. “Metasurfaces enable improved optical lens performance“). Alas, my bedtime is approaching, so I shall leave the nuts and bolts of this engineering challenge as an exercise for the reader…
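The frequency dependence behind these aberrations is commonly modeled with an empirical dispersion formula such as Cauchy’s equation, n(λ) ≈ A + B/λ². A sketch with illustrative coefficients (placeholders in the right ballpark for a clear plastic, not measured values):

```python
# Cauchy's empirical dispersion equation: n(lambda) = A + B / lambda^2.
# A and B below are illustrative placeholders, not measured values.
A, B = 1.49, 0.005  # B in micrometers^2

def n(wavelength_um):
    return A + B / wavelength_um**2

blue, red = 0.45, 0.65  # wavelengths in micrometers
print(n(blue) > n(red))  # True: blue sees a higher index and bends more
```

Because blue light always sees a slightly higher index than red, a white LED gets smeared into a rainbow everywhere the gradient bends it, which is why a single tuned wavelength preserves the intended metric.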

The Appearance of Arbitrary Contingency to Our Diverse Qualia

By David Pearce (Mar 21, 2012; Reddit AMA)

 

The appearance of arbitrary contingency to our diverse qualia – and undiscovered state-spaces of posthuman qualia and hypothetical micro-qualia – may be illusory. Perhaps they take the phenomenal values they do as a matter of logico-mathematical necessity. I’d make this conjecture against the backdrop of some kind of zero ontology. Intuitively, there seems no reason for anything at all to exist. The fact that the multiverse exists (apparently) confounds one’s pre-reflective intuitions in the most dramatic possible way. However, this response is too quick. The cosmic cancellation of the conserved constants (mass-energy, charge, angular momentum) to zero, and the formal equivalence of zero information to all possible descriptions [the multiverse?] means we have to take seriously this kind of explanation-space. The most recent contribution to the zero-ontology genre is physicist Lawrence Krauss’s readable but frustrating “A Universe from Nothing: Why There Is Something Rather Than Nothing“. Anyhow, how does a zero ontology tie in with (micro-)qualia? Well, if the solutions to the master equation of physics do encode the field-theoretic values of micro-qualia, then perhaps their numerically encoded textures “cancel out” to zero too. To use a trippy, suspiciously New-Agey-sounding metaphor, imagine the colours of the rainbow displayed as a glorious spectrum – but on recombination cancelling out to no colour at all. Anyhow, I wouldn’t take any of this too seriously: just speculation on idle speculation. It’s tempting simply to declare the issue of our myriad qualia to be an unfathomable mystery. And perhaps it is. But mysterianism is sterile.

Open Individualism and Antinatalism: If God could be killed, it’d be dead already

Abstract

Personal identity views (closed, empty, open) serve in philosophy the role that conservation laws play in physics. They recast difficult problems in solvable terms, and by expanding our horizon of understanding, they likewise allow us to conceive of new classes of problems. In this context, we posit that philosophy of personal identity is relevant in the realm of ethics by helping us address age-old questions like whether being born is good or bad. We further explore the intersection between philosophy of personal identity and philosophy of time, and discuss the ethical implications of antinatalism in a tenseless open individualist “block-time” universe.

Introduction

Learning physics, we often find wide-reaching concepts that simplify many problems by using an underlying principle. A good example of this is the law of conservation of energy. Take for example the following high-school physics problem:

An object that weighs X kilograms falls from a height of Y meters on a planet without an atmosphere and a gravity of Zg. Calculate the velocity with which this object will hit the ground.

One could approach this problem with Newton’s laws of motion: derive the distance traveled by the object as a function of time, differentiate it to obtain the velocity, and evaluate that velocity at the moment the object has fallen Y meters.

Alternatively, you could simply note that since energy is conserved, all of the potential energy of the object at a height of Y meters will have been transformed into kinetic energy at height zero. Setting the two equal and solving for the velocity makes the problem much easier.
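In code, the conservation shortcut is one line, assuming (as the problem states) no atmosphere and therefore no drag:

```python
import math

def impact_speed(height_m, gravity_ms2):
    """m*g*h = (1/2)*m*v^2  =>  v = sqrt(2*g*h); the mass cancels out."""
    return math.sqrt(2 * gravity_ms2 * height_m)

# Hypothetical numbers: Y = 20 m on a planet where gravity is 2g (g = 9.81 m/s^2).
print(round(impact_speed(20, 2 * 9.81), 2))  # 28.01
```

Note that the object’s mass X never appears: it cancels on both sides of the energy balance, which is itself a small example of the conservation law doing the work for you.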

Once one has learned “the trick,” one starts to see many other problems differently. Grasping these deep invariants opens up new horizons: many problems that seemed impossible become solvable, and one can also ask new questions, which in turn opens up new problems that cannot be solved with those principles alone.

Does this ever happen in philosophy? Perhaps entire classes of difficult problems in philosophy may become trivial (or at least tractable) once one grasps powerful principles. Such is the case, I would claim, of transcending common-sense views of personal identity.

Personal Identity: Closed, Empty, Open

In Ontological Qualia I discussed three core views about personal identity. For those who have not encountered these concepts, I recommend reading that article for an expanded discussion.

In brief:

  1. Closed Individualism: You start existing when you are born, and stop when you die.
  2. Empty Individualism: You exist as a “time-slice” or “moment of experience.”
  3. Open Individualism: There is only one subject of experience, who is everyone.


Most people are Closed Individualists; this is the default common sense view for good evolutionary reasons. But what grounds are there to believe in this view? Intuitively, the fact that you will wake up in “your body” tomorrow is obvious and needs no justification. However, explaining why this is the case in a clear way requires formalizing a wide range of concepts such as causality, continuity, memory, and physical laws. And when one tries to do so one will generally find a number of barriers that will prevent one from making a solid case for Closed Individualism.

As an example line of argument, one could argue that what defines you as an individual is your set of memories, and since the person who will wake up in your body tomorrow is the only human being with access to your current memories then you must be it. And while this may seem to work on the surface, a close inspection reveals otherwise. In particular, all of the following facts work against it: (1) memory is a constructive process and every time you remember something you remember it (slightly) differently, (2) memories are unreliable and do not always work at will (e.g. false memories), (3) it is unclear what happens if you copy all of your memories into someone else (do you become that person?), (4) how many memories can you swap with someone until you become a different person?, and so on. Here the more detailed questions one asks, the more ad-hoc modifications of the theory are needed. In the end, one is left with what appears to be just a set of conventional rules to determine whether two persons are the same for practical purposes. But it does not seem to carve nature at its joints; you’d be merely over-fitting the problem.

The same happens with most Closed Individualist accounts. You need to define what the identity carrier is, and after doing so one can identify situations in which identity is not well-defined given that identity carrier (memory, causality, shared matter, etc.).

But for both Open and Empty Individualism, identity is well-defined for any being in the universe. Either all are the same, or all are different. Critics might say that this is a trivial and uninteresting point, perhaps even just definitional. Closed Individualism seems sufficiently arbitrary, however, that questioning it is warranted, and once one does so it is reasonable to start the search for alternatives by taking a look at the trivial cases in which either all or none of the beings are the same.

Moreover, there are many arguments in favor of these views. They indeed solve, or usefully reformulate, a range of philosophical problems when applied diligently. I would argue that they play a role in philosophy similar to that of conservation of energy in physics. The energy conservation law has been empirically tested to extremely high levels of precision, which is something we will have to do without in the realm of philosophy; instead, we shall rely on powerful philosophical insights. In addition, these views make a lot of problems tractable and offer a powerful lens through which to interpret core difficulties in the field.

Open and Empty Individualism either solve or have a bearing on: decision theory, utilitarianism, fission/fusion, mind-uploading and mind-melding, panpsychism, etc. For now, let us focus on…

Antinatalism

Antinatalism is a philosophical view that posits that, all considered, it is better not to be born. Many philosophers could be adequately described as antinatalists, but perhaps the most widely recognized proponent is David Benatar. A key argument Benatar considers is that there might be an asymmetry between pleasure and pain. Granted, he would say, experiencing pleasure is good, and experiencing suffering is bad. But while “the absence of pain is good, even if that good is not enjoyed by anyone”, we also have that “the absence of pleasure is not bad unless there is somebody for whom this absence is a deprivation.” Thus, while being born can give rise to both good and bad, not being born can only be good.

Contrary to popular perception, antinatalists are not more selfish or amoral than others. On the contrary, their willingness to “bite the bullet” of a counter-intuitive but logically defensible argument is a sign of being willing to face social disapproval for a good cause. But along with the stereotype, it is generally true that antinatalists are temperamentally depressive. This, of course, does not invalidate their arguments. If anything, sometimes a degree of depressive realism is essential to arrive at truly sober views in philosophy. But it shouldn’t be a surprise to learn that experiencing, or having experienced, suffering predisposes people to vehemently argue for the importance of its elimination. Having direct acquaintance with the self-disclosing nastiness of suffering does give one a broader evidential base for commenting on the matter of pain and pleasure.

Antinatalism and Closed Individualism

Interestingly, Benatar’s argument, and those of many antinatalists, rely implicitly on personal identity background assumptions. In particular, antinatalism is usually framed in a way that assumes Closed Individualism.

The idea that a “person can be harmed by coming into existence” is developed within a conceptual framework in which the inhabitants of the universe are narrative beings. These beings have both spatial and temporal extension. And they also have the property that had the conditions previous to their birth been different, they might not have existed. But how many possible beings are there? How genetically or environmentally different do they need to be to be different beings? What happens if two beings merge? Or if they converge towards the same exact physical configuration over time?

 

This conceptual framework has counter-intuitive implications when taken to the extreme. For example, the amount of harm you do is measured by how many people you allow to be born, rather than by how many years of suffering you prevent.

For the sake of the argument, imagine that you have control over a sentient-AI-enabled virtual environment in which you can make beings start existing and stop existing. Say that you create two beings, A and B, who are different in morally irrelevant ways (e.g. one likes blue more than red, but on average they both end up suffering and delighting in their experience with the same intensity). With Empty Individualism, you would consider giving A 20 years of life and not creating B vs. giving A and B 10 years of life each to be morally equivalent. But with Closed Individualism you would rightly worry that these two scenarios are completely different. By giving years of life to both A and B (any amount of life!) you have doubled the number of subjects who are affected by your decisions. If the gulf of individuality between two persons is infinite, as Closed Individualism would have it, by creating both A and B you have created two parallel realities, and that has an ontological effect on existence. It’s a big deal. Perhaps a way to put it succinctly would be: God considers much more carefully the question of whether to create a person who will live only 70 years versus whether to add a million years of life to an angel who has already lived for a very long time. Creating an entirely new soul is not to be taken lightly (incidentally, this may cast the pro-choice/pro-life debate in an entirely new light).

Thus, antinatalism is usually framed in a way that assumes Closed Individualism. The idea that a being is (possibly) harmed by coming into existence casts the possible solutions in terms of whether one should allow animals (or beings) to be born. But if one were to take an Open or Empty Individualist point of view, the question becomes entirely different. Namely, what kind of experiences should we allow to exist in the future…

Antinatalism and Empty Individualism

I think that the strongest case for antinatalism comes from a take on personal identity that is different than the implicit default (Closed Individualism). If you assume Empty Individualism, in particular, reality starts to seem a lot more horrible than you had imagined. Consider how in Empty Individualism fundamental entities exist as “moments of experience” rather than narrative streams. Therefore, every time that an animal suffers, what is actually happening is that some moments of experience get to have their whole existence in pain and suffering. In this light, one stops seeing people who suffer terrible happenings (e.g. kidney stones, schizophrenia, etc.) as people who are unlucky, and instead one sees their brains as experience machines capable of creating beings whose entire existence is extremely negative.

With Empty Individualism there is simply no way to “make it up to someone” for having had a bad experience in the past. Thus, out of compassion for the extremely negative moments of experience, one could argue that it might be reasonable to try to avoid this whole business of life altogether. That said, this imperative does not come from the asymmetry between pain and pleasure Benatar talks about (which, as we saw, implicitly requires Closed Individualism). In Empty Individualism it does not make sense to say that someone has been brought into existence. So antinatalism gets justified from a different angle, albeit one that might be even more powerful.

In my assessment, the mere possibility of Empty Individualism is a good reason to take antinatalism very seriously.

It is worth noting that the combination of Empty Individualism and Antinatalism has been (implicitly) discussed by Thomas Metzinger (cf. Benevolent Artificial Anti-Natalism (BAAN)) and FRI‘s Brian Tomasik.

Antinatalism and Open Individualism

Here is a Reddit post and then a comment on a related thread (by the same author) worth reading on this subject (indeed these artifacts motivated me to write the article you are currently reading):

There’s an interesting theory of personal existence making the rounds lately called Open Individualism. See here, here, and here. Basically, it claims that consciousness is like a single person in a huge interconnected library. One floor of the library contains all of your life’s experiences, and the other floors contain the experiences of others. Consciousness wanders the aisles, and each time he picks up a book he experiences whatever moment of life is recorded in it as if he were living it. Then he moves onto the next one (or any other random one on any floor) and experiences that one. In essence, the “experiencer” of all experience everywhere, across all conscious beings, is just one numerically identical subject. It only seems like we are each separate “experiencers” because it can only experience one perspective at a time, just like I can only experience one moment of my own life at a time. In actuality, we’re all the same person.

 

Anyway, there’s no evidence for this, but it solves a lot of philosophical problems apparently, and in any case there’s no evidence for the opposing view either because it’s all speculative philosophy.

 

But if this were true, and when I’m done living the life of this particular person, I will go on to live every other life from its internal perspective, it has some implications for antinatalism. All suffering is essentially experienced by the same subject, just through the lens of many different brains. There would be no substantial difference between three people suffering and three thousand people suffering, assuming their experiences don’t leave any impact or residue on the singular consciousness that experiences them. Even if all conscious life on earth were to end, there are still likely innumerable conscious beings elsewhere in the universe, and if Open Individualism is correct, I’ll just move on to experiencing those lives. And since I can re-experience them an infinite number of times, it makes no difference how many there are. In fact, even if I just experienced the same life over and over again ten thousand times, it wouldn’t be any different from experiencing ten thousand different lives in succession, as far as suffering is concerned.

 

The only way to end the experience of suffering would be to gradually elevate all conscious beings to a state of near-constant happiness through technology, or exterminate every conscious being like the Flood from the Halo series of games. But the second option couldn’t guarantee that life wouldn’t arise again in some other corner of the multiverse, and when it did, I’d be right there again as the conscious experiencer of whatever suffering it would endure.

 

I find myself drawn to Open Individualism. It’s not mysticism, it’s not a Big Soul or something we all merge with, it’s just a new way of conceptualizing what it feels like to be a person from the inside. Yet, it has these moral implications that I can’t seem to resolve. I welcome any input.

 

– “Open individualism and antinatalism” by Reddit user CrumbledFingers in r/antinatalism (March 23, 2017)

And on a different thread:

I have thought a lot about the implications of open individualism (which I will refer to as “universalism” from here on, as that’s the name coined by its earliest proponent, Arnold Zuboff) for antinatalism. In short, I think it has two major implications, one of which you mention. The first, as you say, is that freedom from conscious life is impossible. This is bad, but not as bad as it would be if I were aware of it from every perspective. As it stands, at least on Earth, only a small number of people have any inkling that they are me. So, it is not like experiencing the multitude of conscious events taking place across reality is any kind of burden that accumulates over time; from the perspective of each isolated nervous system, it will always appear that whatever is being experienced is the only thing I am experiencing. In this way, the fact that I am never truly unconscious does not have the same sting as it would to, for example, an insomniac, who is also never unconscious but must experience the constant wakefulness from one integrated perspective all the time.

 

It’s like being told that I will suffer total irreversible amnesia at some point in my future; while I can still expect to be the person that experiences all the confusion and anxiety of total amnesia when it happens, I must also acknowledge that the residue of any pains I would have experienced beforehand would be erased. Much of what makes consciousness a losing game is the persistence of stresses. Universalism doesn’t imply that any stresses will carry over between the nervous systems of individual beings, so the reality of my situation is by no means as nightmarish as eternal life in a single body (although, if there exists an immortal being somewhere in the universe, I am currently experiencing the nightmare of its life).

 

The second implication of this view for antinatalism is that one of the worst things about coming into existence, namely death, is placed in quite a different context. According to the ordinary view (sometimes called “closed” individualism), death permanently ends the conscious existence of an alienated self. Universalism says there is no alienated self that is annihilated upon the death of any particular mind. There are just moments of conscious experience that occur in various substrates across space and time, and I am the subject of all such experiences. Thus, the encroaching wall of perpetual darkness and silence that is usually an object of dread becomes less of a problem for those who have realized that they are me. Of course, this realization is not built into most people’s psychology and has to be learned, reasoned out, intellectually grasped. This is why procreation is still immoral, because even though I will not cease to exist when any specific organism dies, from the perspective of each one I will almost certainly believe otherwise, and that will always be a source of deep suffering for me. The fewer instances of this existential dread, however misplaced they may be, the better.

 

This is why it’s important to make more people understand the position of universalism/open individualism. In the future, long after the person typing this sentence has perished, my well-being will depend in large part on having the knowledge that I am every person. The earlier in each life I come to that understanding, and thus diminish the fear of dying, the better off I will be. Naturally, this project decreases in potential impact if conscious life is abundant in the universe, and in response to that problem I concede there is probably little hope, unless there are beings elsewhere in the universe that have comprehended who they are and are taking the same steps in their spheres of influence. My dream is that intelligent life eventually either snuffs itself out or discovers how to connect many nervous systems together, which would demonstrate to every connected mind that it has always belonged to one subject, has always been me, but I don’t have any reason to assume this is even possible on a physical level.

 

So, I suppose you are mostly right about one thing: there are no lucky ones that escape the badness of life’s worst agonies, either by virtue of a privileged upbringing or an instantaneous and painless demise. They and the less fortunate ones are all equally me. Yet, the horror of going through their experiences is mitigated somewhat in the details.

 

– A comment by CrumbledFingers in the Reddit post “Antinatalism and Open individualism“, also in r/antinatalism (March 12, 2017)

Our brain tries to make sense of metaphysical questions in wet-ware that shares computational space with a lot of adaptive survival programs. It does not matter if you have thick barriers (cf. thick and thin boundaries of the mind): the way you assess the value of situations as a human will tend to over-focus on whatever would allow you to go up Maslow’s hierarchy of needs (or, more cynically, to achieve great feats that signal your genetic fitness). Our motivational architecture is implemented in such a way that it is very good at handling questions like how to find food when you are hungry and how to play social games in a way that impresses others and leaves a social mark. Our brains utilize many heuristics based on personhood and narrative-streams when exploring the desirability of present options. We are people, and our brains are adapted to solve people problems. Not, as it turns out, general problems involving the entire state-space of possible conscious experiences.

Prandium Interruptus

Our brains render our inner world-simulation with flavors and textures of qualia to suit their evolutionary needs. This, in turn, impairs our ability to aptly represent scenarios that go beyond the range of normal human experiences. Let me illustrate this point with the following thought experiment:

Would you rather (a) have a 1-hour meal, or (b) have the same meal, but at the half-hour point be instantly transformed into a simple, amnesic, and blank experience of perfectly neutral hedonic value that lasts ten quintillion years, and, once that extremely long stretch of neither-happiness-nor-suffering ends, resume the rest of the meal as if nothing had happened, with no memory of the neutral period?

According to most utilitarian calculi these two scenarios ought to be perfectly equivalent. In both cases the total amount of positive and negative qualia is the same (the full duration of the meal) and the only difference is that the latter also contains a large amount of neutral experience too. Whether classical or negative, utilitarians should consider these experiences equivalent since they contain the same amount of pleasure and pain (note: some other ethical frameworks do distinguish between these cases, such as average and market utilitarianism).
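The equivalence claim is just a sum over hedonic value: inserting a stretch of exactly-zero valence changes the duration but not the total. A toy sketch (the hedonic numbers are arbitrary, made up for illustration):

```python
# Toy hedonic accounting: each episode is (duration, hedonic value per unit time).
meal_first_half = (0.5, 8.0)            # hours, arbitrary positive valence
meal_second_half = (0.5, 8.0)
neutral_eon = (10**19 * 8766.0, 0.0)    # ~ten quintillion years in hours, valence 0

def total(*episodes):
    return sum(duration * value for duration, value in episodes)

a = total(meal_first_half, meal_second_half)
b = total(meal_first_half, neutral_eon, meal_second_half)
print(a == b)  # True: this calculus cannot tell the scenarios apart
```

Any utilitarian calculus that scores experience purely as valence × duration assigns both scenarios the same value, which is exactly what makes the intuitive preference for (a) interesting.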

Intuitively, however, (a) seems a lot better than (b). One imagines oneself having an awfully long experience, bored out of one’s mind, just wanting it to end, get it over with, and get back to enjoying the nice meal. But the very premise of the thought experiment presupposes that one will not be bored during that period of time, nor will one be wishing it to be over, or anything of the sort, considering that all of those are mental states of negative quality and the experience is supposed to be neutral.

Now this is of course a completely crazy thought experiment. Or is it?

The One-Electron View

In 1940 John Wheeler proposed to Richard Feynman the idea that all of reality is made of a single electron moving backwards and forwards in time, interfering with itself. This view has come to be regarded as the One-Electron Universe. Under Open Individualism, that one electron is you. From every single moment of experience to the next, you may have experienced life as a sextillion different animals, been 10^32 fleeting macroscopic entangled particles, and gotten stuck as a single non-interacting electron in the inter-galactic medium for googols of subjective years. Of course you will not remember any of this, because your memories, and indeed all of your motivational architecture and anticipation programs, are embedded in the brain you are instantiating right now. From that point of view, there is absolutely no trace of the experiences you had during this hiatus.

The above way of describing the one-electron view is still just an approximation. In order to see it fully, we also need to address the fact that there is no “natural” order to all of these different experiences. Every way of factorizing it and describing the history of the universe as “this happened before this happened” and “this, now that” could be equally inapplicable from the point of view of fundamental reality.

Philosophy of Time


Presentism is the view that only the present moment is real. The future and the past are just conceptual constructs useful to navigate the world, but not actual places that exist. The “past exists as footprints”, in a manner of speaking. “Footprints of the past” are just strangely-shaped information-containing regions of the present, including your memories. Likewise, the “future” is unrealized: a helpful abstraction which evolution gave us to survive in this world.

On the other hand, eternalism treats the future and the past as always-actualized always-real landscapes of reality. Every point in space-time is equally real. Physically, this view tends to be brought up in connection with the theory of relativity, where frame-invariant descriptions of the space-time continuum have no absolute present line. For a compelling physical case, see the Rietdijk-Putnam argument.

Eternalism has been explored in literature and spirituality extensively. To name a few artifacts: The Egg, Hindu and Buddhist philosophy, the videos of Bob Sanders (cf. The Gap in Time, The Complexity of Time), the essays of Philip K. Dick and J. L. Borges, the poetry of T. S. Eliot, the fiction of Kurt Vonnegut Jr (Timequake, Slaughterhouse Five, etc.), and the graphic novels of Alan Moore, such as Watchmen:

Let me know in the comments if you know of any other work of fiction that explores this theme. In particular, I would love to assemble a comprehensive list of literature that explores Open Individualism and Eternalism.

Personal Identity and Eternalism

For the time being (no pun intended), let us assume that Eternalism is correct. How do Eternalism and personal identity interact? Doctor Manhattan in the above images (taken from Watchmen) exemplifies what it would be like to be a Closed Individualist Eternalist. He seems to be aware of his entire timeline at once, yet recognizes his unique identity apart from others. That said, as explained above, Closed Individualism is a distinctly unphysical theory of identity. One would thus expect Doctor Manhattan, given his physically-grounded understanding of reality, to espouse a different theory of identity.

A philosophy that pairs Empty Individualism with Eternalism is the stuff of nightmares. Not only would some beings, as under Empty Individualism alone, happen to exist entirely as beings of pain. We would also have that such unfortunate moments of experience are stuck in time. Like insects in amber, their expressions of horror and their urgency to run away from pain and suffering are forever crystallized in their corresponding spatiotemporal coordinates. I personally find this view paralyzing and sickening, though I am aware that such a reaction is not adaptive for the abolitionist project. Namely, even if “Eternalism + Empty Individualism” is a true account of reality, one ought not to be so frightened by it that one becomes incapable of working towards preventing future suffering. In this light, I adopt the attitude of “hope for the best, plan for the worst”.

Lastly, if Open Individualism and Eternalism are both true (as I suspect is the case), we would be in for what amounts to an incredibly trippy picture of reality. We are all one timeless spatiotemporal crystal. But why does this eternal crystal, who is everyone, exist? Here the one-electron view and the question “why does anything exist?” could both be simultaneously addressed with a single logico-physical principle. Namely, that the sum-total of existence contains no information to speak of. This is what David Pearce calls “Zero Ontology” (see: 1, 2, 3, 4). What you and I are, in the final analysis, is the necessary implication of there being no information; we are all a singular pattern of self-interference whose ultimate nature amounts to a dimensionless unit-sphere in Hilbert space. But this is a story for another post.

On a more grounded note, Scientific American recently ran an article that could be placed in this category of Open Individualism and Eternalism. In it the authors argue that the physical signatures of multiple-personality disorder, which explain the absence of phenomenal binding between alters that share the same brain, could be extended to explain why reality is both one and yet appears as the many. We are, in this view, all alters of the universe.

Personal Identity X Philosophy of Time X Antinatalism

Sober, scientifically grounded, and philosophically rigorous accounts of the awfulness of reality are rare. On the one hand, temperamentally happy individuals are more likely to think about the possibilities of heaven that lie ahead of us, and their heightened positive mood will likewise make them more likely to report on their findings. Temperamental depressives, on the other hand, may both investigate reality with less motivated reasoning than the euthymic and also be less likely to report on the results due to their subdued mood (“why even try? why even bother to write about it?”). Suffering in the Multiverse by David Pearce is a notable exception to this pattern. David’s essay highlights that if Eternalism is true together with Empty Individualism, there are vast regions of the multiverse filled with suffering that we can simply do nothing about (“Everett Hell Branches”). Taken together with a negative utilitarian ethic, this represents a calamity of (quite literally) astronomical proportions. And, sadly, there simply is no off-button to the multiverse as a whole. The suffering is/has/will always be there. And this means that the best we can do is to avoid the suffering of those beings in our forward light-cone (a drop relative to the size of the ocean of existence). The only hope left is to find a loophole in quantum mechanics that allows us to cross into other Everett branches of the multiverse and launch cosmic rescue missions. A counsel of despair or a rational prospect? Only time will tell.

Another key author that explores the intersection of these views is Mario Montano (see: Eternalism and Its Ethical Implications and The Savior Imperative).

A key point that both of these authors make is that however nasty reality might be, ethical antinatalists and negative utilitarians should not hold their breath about the possibility that reality can be destroyed. In Open Individualism plus Eternalism, the light of consciousness (perhaps what some might call the secular version of God) simply is, everywhere and eternally. If reality could be destroyed, such destruction is certainly limited to our forward light-cone. And unlike in Closed Individualist accounts, it is not possible to help anyone by preventing their birth; the one subject of existence has already been born, and will never be unborn, so to speak.

Nor should ethical antinatalists and negative utilitarians think that avoiding having kids is in any way contributing to the cause of reducing suffering. It is reasonable to assume that the personality traits of agreeableness (specifically care and compassion), openness to experience, and high levels of systematizing intelligence are all over-represented among antinatalists. Insofar as these traits are needed to build a good future, antinatalists should in fact be some of the people who reproduce the most. Mario Montano says:

Hanson calls the era we live in the “dream time” since it’s evolutionarily unusual for any species to be wealthy enough to have any values beyond “survive and reproduce.” However, from an anthropic perspective in infinite dimensional Hilbert space, you won’t have any values beyond “survive and reproduce.” The you which survives will not be the one with exotic values of radical compassion for all existence that caused you to commit peaceful suicide. That memetic stream weeded himself out and your consciousness is cast to a different narrative orbit which wants to survive and reproduce his mind. Eventually. Wanting is, more often than not, a precondition for successfully attaining the object of want.

Physicalism Implies Existence Never Dies

Also, from the same essay:

Anti-natalists full of weeping benignity are literally not successful replicators. The Will to Power is life itself. It is consciousness itself. And it will be, when a superintelligent coercive singleton swallows superclusters of baryonic matter and then spreads them as the flaming word into the unconverted future light cone.

[…]

You eventually love existence. Because if you don’t, something which does swallows you, and it is that which survives.

I would argue that the above reasoning is not entirely correct in the grand scheme of things*, but it is certainly applicable in the context of human-like minds and agents. See also: David Pearce’s similar criticisms of antinatalism as a policy.

This should underscore the fact that in its current guise, antinatalism is completely self-limiting. Worryingly, one could imagine an organized contingent of antinatalists conducting research on how to destroy life as efficiently as possible. Antinatalists are generally very smart, and if Eliezer Yudkowsky‘s claim that “every 18 months the minimum IQ necessary to destroy the world drops by one point” is true, we may be in for some trouble. Pearce’s, Montano’s, and my shared take is that even if something akin to negative utilitarianism is the case, we should still pursue the goal of diminishing suffering in as peaceful a way as possible. The risk of trying to painlessly destroy the world and failing to do so might turn out to be ethically catastrophic. A much better bet would be, we claim, to work towards the elimination of suffering by developing commercially successful hedonic recalibration technology. This also has the benefit that both depressives and life-lovers will want to team up with you; indeed, the promise of super-human bliss can be extraordinarily motivating to people who already lead happy lives, whereas the prospect of achieving “at best nothing” sounds stale and uninviting (if not outright antagonistic) to them.

An Evolutionary Environment Set Up For Success

If we want to create a world free from suffering, we will have to contend with the fact that suffering is adaptive in certain environments. The solution here is to avoid such environments, and foster ecosystems of mind that give an evolutionary advantage to the super-happy. What is more, we already have the basic ingredients to do so. In Wireheading Done Right I discussed how, right now, the economy is based on trading three core goods: (1) survival tools, (2) power, and (3) information about the state-space of consciousness. Thankfully, the world right now is populated by humans who largely choose to spend their extra income on fun rather than on trips to the sperm bank. In other words, people are willing to trade some of their expected reproductive success for good experiences. This is good because it allows the existence of an economy of information about the state-space of consciousness, and thus creates an evolutionary advantage for caring about consciousness and being good at navigating its state-space. But for this to be sustainable, we will need to find a way to make positive valence gradients (i.e. gradients of bliss) both economically useful and power-granting. Otherwise, I would argue, the part of the economy that is dedicated to trading information about the state-space of consciousness is bound to be displaced by the other two (i.e. survival and power). For a more detailed discussion on these questions see: Consciousness vs. Pure Replicators.


Can we make the benevolent exploration of the state-space of consciousness evolutionarily advantageous?

In conclusion, to close down hell (to the extent that is physically possible), we need to take advantage of the resources and opportunities granted to us by merely living in Hanson’s “dream time” (cf. Age of Spandrels). This includes the fact that right now people are willing to spend money on new experiences (especially if novel and containing positive valence), and the fact that philosophy of personal identity can still persuade people to work towards the wellbeing of all sentient beings. In particular, scientifically-grounded arguments in favor of both Open and Empty Individualism weaken people’s sense of self and make them more receptive to care about others, regardless of their genetic relatedness. On its natural course, however, this tendency may ultimately be removed by natural selection: if those who are immune to philosophy are more likely to maximize their inclusive fitness, humanity may devolve into philosophical deafness. The solution here is to identify the ways in which philosophical clarity can help us overcome coordination problems, highlight natural ethical Schelling points, and ultimately allow us to summon a benevolent super-organism to carry forward the abolition of as much suffering as is physically possible.

And only once we have done everything in our power to close down hell in all of its guises will we be able to enjoy the rest of our forward light-cone in good conscience. Till then, we ethically-minded folks shall relentlessly work on building universe-sized fire-extinguishers to put out the fire of Hell.


* This is for several reasons: (1) phenomenal binding is not epiphenomenal, (2) sadly, the optimal computational valence gradients are not necessarily located on the positive side, and (3) wanting, liking, and learning are possible to disentangle.

John von Neumann

Passing of a Great Mind

John von Neumann, a Brilliant, Jovial Mathematician, was a Prodigious Servant of Science and his Country

by Clary Blair Jr. – Life Magazine (February 25th, 1957)

The world lost one of its greatest scientists when Professor John von Neumann, 54, died this month of cancer in Washington, D.C. His death, like his life’s work, passed almost unnoticed by the public. But scientists throughout the free world regarded it as a tragic loss. They knew that Von Neumann’s brilliant mind had not only advanced his own special field, pure mathematics, but had also helped put the West in an immeasurably stronger position in the nuclear arms race. Before he was 30 he had established himself as one of the world’s foremost mathematicians. In World War II he was the principal discoverer of the implosion method, the secret of the atomic bomb.

The government officials and scientists who attended the requiem mass at the Walter Reed Hospital chapel last week were there not merely in recognition of his vast contributions to science, but also to pay personal tribute to a warm and delightful personality and a selfless servant of his country.

For more than a year Von Neumann had known he was going to die. But until the illness was far advanced he continued to devote himself to serving the government as a member of the Atomic Energy Commission, to which he was appointed in 1954. A telephone by his bed connected directly with his AEC office. On several occasions he was taken downtown in a limousine to attend commission meetings in a wheelchair. At Walter Reed, where he was moved early last spring, an Air Force officer, Lieut. Colonel Vincent Ford, worked full time assisting him. Eight airmen, all cleared for top secret material, were assigned to help on a 24-hour basis. His work for the Air Force and other government departments continued. Cabinet members and military officials continually came for his advice, and on one occasion Secretary of Defense Charles Wilson, Air Force Secretary Donald Quarles and most of the top Air Force brass gathered in Von Neumann’s suite to consult his judgement while there was still time. So relentlessly did Von Neumann pursue his official duties that he risked neglecting the treatise which was to form the capstone of his work on the scientific specialty, computing machines, to which he had devoted many recent years.


His fellow scientists, however, did not need any further evidence of Von Neumann’s rank as a scientist – or his assured place in history. They knew that during World War II at Los Alamos Von Neumann’s development of the idea of implosion speeded up the making of the atomic bomb by at least a full year. His later work with electronic computers quickened U.S. development of the H-bomb by months. The chief designer of the H-bomb, Edward Teller, once said with wry humor that Von Neumann was “one of those rare mathematicians who could descend to the level of the physicist.” Many theoretical physicists admit that they learned more from Von Neumann in methods of scientific thinking than from any of their colleagues. Hans Bethe, who was director of the theoretical physics division at Los Alamos, says, “I have sometimes wondered whether a brain like Von Neumann’s does not indicate a species superior to that of man.”


The foremost authority on computing machines in the U.S., Von Neumann was more than anyone else responsible for the increased use of the electronic “brains” in government and industry. The machine he called MANIAC (mathematical analyzer, numerical integrator and computer), which he built at the Institute for Advanced Study in Princeton, N.J., was the prototype for most of the advanced calculating machines now in use. Another machine, NORC, which he built for the Navy, can deliver a full day’s weather prediction in a few minutes. The principal adviser to the U.S. Air Force on nuclear weapons, Von Neumann was the most influential scientific force behind the U.S. decision to embark on accelerated production of intercontinental ballistic missiles. His “theory of games,” outlined in a book which he published in 1944 in collaboration with Economist Oskar Morgenstern, opened up an entirely new branch of mathematics. Analyzing the mathematical probabilities behind games of chance, Von Neumann went on to formulate a mathematical approach to such widespread fields as economics, sociology and even military strategy. His contributions to the quantum theory, the theory which explains the emission and absorption of energy in atoms and the one on which all atomic and nuclear physics are based, were set forth in a work entitled Mathematical Foundations of Quantum Mechanics which he wrote at the age of 23. It is today one of the cornerstones of this highly specialized branch of mathematical thought.

For Von Neumann the road to success was a many-laned highway with little traffic and no speed limit. He was born in 1903 in Budapest and was of the same generation of Hungarian physicists as Edward Teller, Leo Szilard and Eugene Wigner, all of whom later worked on atomic energy development for the U.S.

The eldest of three sons of a well-to-do Jewish financier who had been decorated by the Emperor Franz Josef, John von Neumann grew up in a society which placed a premium on intellectual achievement. At the age of 6 he was able to divide two eight-digit numbers in his head. By the age of 8 he had mastered college calculus and as a trick could memorize on sight a column in a telephone book and repeat back the names, addresses and numbers. History was only a “hobby,” but by the outbreak of World War I, when he was 10, his photographic mind had absorbed most of the contents of the 46-volume works edited by the German historian Oncken with a sophistication that startled his elders.

Despite his obvious technical ability, as a young man Von Neumann wanted to follow his father’s financial career, but he was soon dissuaded. Under a kind of supertutor, a first-rank mathematician at the University of Budapest named Leopold Fejer, Von Neumann was steered into the academic world. At 21 he received two degrees – one in chemical engineering at Zurich and a PhD in mathematics from the University of Budapest. The following year, 1926, as Admiral Horthy’s rightist regime had been repressing Hungarian Jews, he moved to Göttingen, Germany, then the mathematical center of the world. It was there that he published his major work on quantum mechanics.

The young professor

His fame now spreading, Von Neumann at 23 qualified as a Privatdozent (lecturer) at the University of Berlin, one of the youngest in the school’s history. But the Nazis had already begun their march to power. In 1929 Von Neumann accepted a visiting lectureship at Princeton University and in 1930, at the age of 26, he took a job there as professor of mathematical physics – after a quick trip to Budapest to marry a vivacious 18-year-old named Mariette Kovesi. Three years later, when the Institute for Advanced Study was founded at Princeton, Von Neumann was appointed – as was Albert Einstein – to be one of its first full professors. “He was so young,” a member of the institute recalls, “that most people who saw him in the halls mistook him for a graduate student.”


Although they worked near each other in the same building, Einstein and Von Neumann were not intimate, and because their approach to scientific matters was different they never formally collaborated. A member of the institute who worked side by side with both men in the early days recalls, “Einstein’s mind was slow and contemplative. He would think about something for years. Johnny’s mind was just the opposite. It was lightning quick – stunningly fast. If you gave him a problem he either solved it right away or not at all. If he had to think about it a long time and it bored him, his interest would begin to wander. And Johnny’s mind would not shine unless whatever he was working on had his undivided attention.” But the problems he did care about, such as his “theory of games,” absorbed him for much longer periods.

‘Proof by erasure’

Partly because of this quicksilver quality Von Neumann was not an outstanding teacher to many of his students. But for the advanced students who could ascend to his level he was inspirational. His lectures were brilliant, although at times difficult to follow because of his way of erasing and rewriting dozens of formulae on the blackboard. In explaining mathematical problems Von Neumann would write his equations hurriedly, starting at the top of the blackboard and working down. When he reached the bottom, if the problem was unfinished, he would erase the top equations and start down again. By the time he had done this two or three times most other mathematicians would find themselves unable to keep track. On one such occasion a colleague at Princeton waited until Von Neumann had finished and said, “I see. Proof by erasure.”

Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.'”

Once a friend showed him an extremely complex problem and remarked that a certain famous mathematician had taken a whole week’s journey across Russia on the Trans-Siberian Railroad to complete it. Rushing for a train, Von Neumann took the problem along. Two days later the friend received an air-mail packet from Chicago. In it was a 50-page handwritten solution to the problem. Von Neumann had added a postscript: “Running time to Chicago: 15 hours, 26 minutes.” To Von Neumann this was not an expression of vanity but of sheer delight – a hole in one.

During periods of intense intellectual concentration Von Neumann, like most of his professional colleagues, was lost in preoccupation, and the real world spun past him. He would sometimes interrupt a trip to put through a telephone call to find out why he had taken the trip in the first place.

Von Neumann believed that concentration alone was insufficient for solving some of the most difficult mathematical problems and that these are solved in the subconscious. He would often go to sleep with a problem unsolved, wake up in the morning and scribble the answer on a pad he kept on the bedside table. It was a common occurrence for him to begin scribbling with pencil and paper in the midst of a nightclub floor show or a lively party, “the noisier,” his wife says, “the better.” When his wife arranged a secluded study for Von Neumann on the third floor of the Princeton home, Von Neumann was furious. “He stormed downstairs,” says Mrs. von Neumann, “and demanded, ‘What are you trying to do, keep me away from what’s going on?’ After that he did most of his work in the living room with my phonograph blaring.”

His pride in his brain power made him easy prey to scientific jokesters. A friend once spent a week working out various steps in an obscure mathematical process. Accosting Von Neumann at a party he asked for help in solving the problem. After listening to it, Von Neumann leaned his plump frame against a door and stared blankly, his mind going through the necessary calculations. At each step in the process the friend would quickly put in, “Well, it comes out to this, doesn’t it?” After several such interruptions Von Neumann became perturbed and when his friend “beat” him to the final answer he exploded in fury. “Johnny sulked for weeks,” recalls the friend, “before he found out it was all a joke.”

He did not look like a professor. He dressed so much like a Wall Street banker that a fellow scientist once said, “Johnny, why don’t you smear some chalk dust on your coat so you look like the rest of us?” He loved to eat, especially rich sauces and desserts, and in later years was forced to diet rigidly. To him exercise was “nonsense.”

Those lively Von Neumann parties

Most card-playing bored him, although he was fascinated by the mathematical probabilities involved in poker and baccarat. He never cared for movies. “Every time we went,” his wife recalls, “he would either go to sleep or do math problems in his head.” When he could do neither he would break into violent coughing spells. What he truly loved, aside from work, was a good party. Residents of Princeton’s quiet academic community can still recall the lively goings-on at the Von Neumann’s big, rambling house on Westcott Road. “Those old geniuses got downright approachable at the Von Neumanns’,” a friend recalls. Von Neumann’s talents as a host were based on his drinks, which were strong, his repertoire of off-color limericks, which was massive, and his social ease, which was consummate. Although he could rarely remember a name, Von Neumann would escort each new guest around the room, bowing punctiliously to cover up the fact that he was not using names in introducing people.

Von Neumann also had a passion for automobiles, not for tinkering with them but for driving them as if they were heavy tanks. He turned up with a new one every year at Princeton. “The way he drove, a car couldn’t possibly last more than a year,” a friend says. Von Neumann was regularly arrested for speeding and some of his wrecks became legendary. A Princeton crossroads was for a while known as “Von Neumann corner” because of the number of times the mathematician had cracked up there. He once emerged from a totally demolished car with this explanation: “I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles an hour. Suddenly one of them stepped out in my path. Boom!”

Mariette and John von Neumann had one child, Marina, born in 1935, who graduated from Radcliffe last June, summa cum laude, with the highest scholastic record in her class. In 1937, the year Von Neumann was elected to the National Academy of Sciences and became a naturalized citizen of the U.S., the marriage ended in divorce. The following year on a trip to Budapest he met and married Klara Dan, whom he subsequently trained to be an expert on electronic computing machines. The Von Neumann home in Princeton continued to be a center of gaiety as well as a hotel for prominent intellectual transients.

In the late 1930s Von Neumann began to receive a new type of visitor at Princeton: the military scientist and engineer. After he had handled a number of jobs for the Navy in ballistics and anti-submarine warfare, word of his talents spread, and Army Ordnance began using him more and more as a consultant at its Aberdeen Proving Ground in Maryland. As war drew nearer this kind of work took up more and more of his time.

During World War II he roved between Washington, where he had established a temporary residence, England, Los Alamos and other defense installations. When scientific groups heard Von Neumann was coming, they would set up all of their advanced mathematical problems like ducks in a shooting gallery. Then he would arrive and systematically topple them over.

After the Axis had been destroyed, Von Neumann urged that the U.S. immediately build even more powerful atomic weapons and use them before the Soviets could develop nuclear weapons of their own. It was not an emotional crusade; Von Neumann, like others, had coldly reasoned that the world had grown too small to permit nations to conduct their affairs independently of one another. He held that world government was inevitable – and the sooner the better. But he also believed it could never be established while Soviet Communism dominated half of the globe. A famous Von Neumann observation at the time: “With the Russians it is not a question of whether but when.” A hard-boiled strategist, he was one of the few scientists to advocate preventive war, and in 1950 he was remarking, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock?”

In late 1949, after the Russians had exploded their first atomic bomb and the U.S. scientific community was split over whether or not the U.S. should build a hydrogen bomb, Von Neumann reduced the argument to: “It is not a question of whether we build it or not, but when do we start calculating?” When the H-bomb controversy raged, Von Neumann slipped quietly out to Los Alamos, took a desk and began work on the first mathematical steps toward building the weapon, specifically deciding which computations would be fed to which electronic computers.

Von Neumann’s principal interest in the postwar years was electronic computing machines, and his advice on computers was in demand almost everywhere. One day he was urgently summoned to the offices of the Rand Corporation, a government-sponsored scientific research organization in Santa Monica, Calif. Rand scientists had come up with a problem so complex that the electronic computers then in existence seemingly could not handle it. The scientists wanted Von Neumann to invent a new kind of computer. After listening to the scientists expound, Von Neumann broke in: “Well, gentlemen, suppose you tell me exactly what the problem is?”

For the next two hours the men at Rand lectured, scribbled on blackboards, and brought charts and tables back and forth. Von Neumann sat with his head buried in his hands. When the presentation was completed, he scribbled on a pad, stared so blankly that a Rand scientist later said he looked as if “his mind had slipped his face out of gear,” then said, “Gentlemen, you do not need the computer. I have the answer.”

While the scientists sat in stunned silence, Von Neumann reeled off the various steps which would provide the solution to the problem. Having risen to this routine challenge, Von Neumann followed up with a routine suggestion: “Let’s go to lunch.”

In 1954, when the U.S. development of the intercontinental ballistic missile was dangerously bogged down, study groups under Von Neumann’s direction began paving the way for solution of the most baffling problems: guidance, miniaturization of components, heat resistance. In less than a year Von Neumann put his O.K. on the project – but not until he had completed a relentless investigation in his own dazzlingly fast style. One day, during an ICBM meeting on the West Coast, a physicist employed by an aircraft company approached Von Neumann with a detailed plan for one phase of the project. It consisted of a tome several hundred pages long on which the physicist had worked for eight months. Von Neumann took the book and flipped through the first several pages. Then he turned it over and began reading from back to front. He jotted down a figure on a pad, then a second and a third. He looked out the window for several seconds, returned the book to the physicist and said, “It won’t work.” The physicist returned to his company. After two months of re-evaluation, he came to the same conclusion.

von_neumann_7

In October 1954 Eisenhower appointed Von Neumann to the Atomic Energy Commission. Von Neumann accepted, although the Air Force and the senators who confirmed him insisted that he retain his chairmanship of the Air Force ballistic missile panel.

Von Neumann had been on the new job only six months when the pain first struck in the left shoulder. After two examinations, the physicians at Bethesda Naval Hospital suspected cancer. Within a month Von Neumann was wheeled into surgery at the New England Deaconess Hospital in Boston. A leading pathologist, Dr. Shields Warren, examined the biopsy tissue and confirmed that the pain was a secondary cancer. Doctors began to race to discover the primary location. Several weeks later they found it in the prostate. Von Neumann, they agreed, did not have long to live.

When he heard the news Von Neumann called for Dr. Warren. He asked, “Now that this thing has come, how shall I spend the remainder of my life?”

“Well, Johnny,” Warren said, “I would stay with the commission as long as you feel up to it. But at the same time I would say that if you have any important scientific papers – anything further scientifically to say – I would get started on it right away.”

Von Neumann returned to Washington and resumed his busy schedule at the Atomic Energy Commission. To those who asked about his arm, which was in a sling, he muttered something about a broken collarbone. He continued to preside over the ballistic missile committee, and to receive an unending stream of visitors from Los Alamos, Livermore, the Rand Corporation, Princeton. Most of these men knew that Von Neumann was dying of cancer, but the subject was never mentioned.

Machines creating new machines

After the last visitor had departed Von Neumann would retire to his second-floor study to work on the paper which he knew would be his last contribution to science. It was an attempt to formulate a concept shedding new light on the workings of the human brain. He believed that if such a concept could be stated with certainty, it would also be applicable to electronic computers and would permit man to make a major step forward in using these “automata.” In principle, he reasoned, there was no reason why some day a machine might not be built which not only could perform most of the functions of the human brain but could actually reproduce itself, i.e., create more supermachines like it. He proposed to present this paper at Yale, where he had been invited to give the 1956 Silliman Lectures.

As the weeks passed, work on the paper slowed. One evening, as Von Neumann and his wife were leaving a dinner party, he complained that he was “uncertain” about walking. Doctors furnished him with a wheelchair. But Von Neumann’s world had begun to close in tight around him. He was seized by periods of overwhelming melancholy.

In April 1956 Von Neumann moved into Walter Reed Hospital for good. Honors were now coming from all directions. He was awarded Yeshiva University’s first Einstein prize. In a special White House ceremony President Eisenhower presented him with the Medal of Freedom. In April the AEC gave him the Enrico Fermi award for his contributions to the theory and design of computing machines, accompanied by a $50,000 tax-free grant.

Although born of Jewish parents, Von Neumann had never practiced Judaism. After his arrival in the U.S. he had been baptized a Roman Catholic. But his divorce from Mariette had put him beyond the sacraments of the Catholic Church for almost 19 years. Now he felt an urge to return. One morning he said to Klara, “I want to see a priest.” He added, “But he will have to be a special kind of priest, one that will be intellectually compatible.” Arrangements were made for special instructions to be given by a Catholic scholar from Washington. After a few weeks Von Neumann began once again to receive the sacraments.

The great mind falters

Toward the end of May the seizures of melancholy began to occur more frequently. In June the doctors finally announced – though not to Von Neumann himself – that the cancer had begun to spread. The great mind began to falter. “At times he would discuss history, mathematics, or automata, and he could recall word for word conversations we had had 20 years ago,” a friend says. “At other times he would scarcely recognize me.” His family – Klara, two brothers, his mother and daughter Marina – drew close around him and arranged a schedule so that one of them would always be on hand. Visitors were more carefully screened. Drugs fortunately prevented Von Neumann from experiencing pain. Now and then his old gifts of memory were again revealed. One day in the fall his brother Mike read Goethe’s Faust to him in German. Each time Mike paused to turn the page, Von Neumann recited from memory the first few lines of the following page.

One of his favorite companions was his mother Margaret von Neumann, 76 years old. In July the family in turn became concerned about her health, and it was suggested that she go to a hospital for a checkup. Two weeks later she died of cancer. “It was unbelievable,” a friend says. “She kept on going right up to the very end and never let anyone know a thing. How she must have suffered to make her son’s last days less worrisome.” Lest the news shock Von Neumann fatally, elaborate precautions were taken to keep it from him. When he guessed the truth, he suffered a severe setback.

Von Neumann’s body, which he had never given much thought to, went on serving him much longer than did his mind. Last summer the doctors had given him only three or four weeks to live. Months later, in October, his passing was again expected momentarily. But not until this month did his body give up. It was characteristic of the impatient, witty and incalculably brilliant John von Neumann that although he went on working for others until he could do no more, his own treatise on the workings of the brain – the work he thought would be his crowning achievement in his own name – was left unfinished.

von_neumann_8

 

 

Qualia Formalism in the Water Supply: Reflections on The Science of Consciousness 2018

Two years ago I attended The Science of Consciousness 2016 (in Tucson, AZ) with David Pearce. Here is my account of that event. This year I went again, but now together with a small contingent representing the Qualia Research Institute (QRI). You can see the videos of our presentations here. Below you will find this year’s writeup:

What Went Great

(1) The Meta-Problem of Consciousness

This time David Chalmers brought the Meta-problem of Consciousness into the overall conversation by making a presentation about his paper on the topic. I think that this was a great addition to the conference, and it played beautifully as a tone-setter.

“The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a problem of consciousness.”

– Chalmers on the Meta-Problem

David Chalmers is famous for defending the case that there is a problem of consciousness. And not only that, but that indeed, an aspect of it, the hard problem, resists conventional methods of explanation (as they focus on form and structure, but consciousness is anything but). Chalmers’ track record of contributions to the field is impressive. His work includes: formalizing foundational problems of consciousness, steel-manning extended-mind/embodied cognition, progress on classical philosophy of language questions (e.g. sense and reference with regards to modal logic), observations on the unity of consciousness, the case for the possibility of super-intelligence, and even the philosophical implications of Virtual Reality (I often link to his Reddit AMA as one of the best layman’s introductions to his work; see also his views on psychedelics). Plus, his willingness to consider, and even steel-man the opponent’s arguments is admirable.*

And of all of his works, I would argue, discussing the meta-problem of consciousness is perhaps one of the things that will help advance the field of consciousness research the most. In brief, we are in sore need of an agreed-upon explanation for the reasons why consciousness poses a problem at all. Rather than getting caught up in unfruitful arguments at the top of the argumentative tree, it is helpful to sometimes be directed to look at the roots of people’s divergent intuitions. This tends to highlight unexpected differences in people’s philosophical background assumptions.

And the fact that these background assumptions are often not specified leads to problems. For example: talking past each other due to differences in terminology, people attacking a chain of reasoning when in fact their disagreement starts at the level of ontology, and failure to recognize and map useful argumentative isomorphisms from one ontology onto another.

Having the Meta-Problem of Consciousness at the forefront of the discussions, in my appraisal of the event, turned out to be very generative. Asking an epiphenomenalist, an eliminativist, a panprotopsychist, etc. to explain why they think their view is true seemed less helpful in advancing the state of our collective knowledge than asking them about their thoughts on the Meta-Problem of Consciousness.

(2) Qualia Formalism in the Water Supply

At the Qualia Research Institute we explicitly assume that consciousness is not only real, but that it is formalizable. This is not a high-level claim about the fact that we can come up with a precise vocabulary to talk about consciousness. It is a radical take on the degree to which formal mathematical models of experience can be discovered. Qualia Formalism, as we define it, is the claim that for any conscious experience, there exists a mathematical object whose properties are isomorphic to the phenomenology of that experience. Anti-formalists, on the other hand, might say that consciousness is an improper reification.

For formalists, consciousness is akin to electromagnetism: we started out with a series of peculiar disparate phenomena such as lightning, electricity, magnets, static electricity, etc. After a lot of work, it turned out that all of these diverse phenomena had a crisp unifying mathematical underpinning. More so, this formalism was not merely descriptive. Light, among other phenomena, was hidden in it. That is, finding a mathematical formalism for real phenomena can generalize to even more domains, be strongly informative for ontology, and ultimately, also be technologically generative (the computer you are using to read this article wouldn’t – and in fact couldn’t – exist if electromagnetism weren’t formalizable).

For anti-formalists, consciousness is akin to Élan vital. People had formed the incorrect impression that explaining life necessitated a new ontology. That life was, in some sense, (much) more than just the sum of life-less forces in complex arrangements. And in order to account for the diverse (apparently unphysical) behaviors of life, we needed a life force. Yet no matter how hard biologists, chemists, and physicists have tried to look for it, no life force has been found. As of 2018 it is widely agreed by scientists that life can be reduced to complex biochemical interactions. In the same vein, anti-formalists about consciousness would argue that people are making a category error when they try to explain consciousness itself. Consciousness will go the same way as Élan vital: it will turn out to be an improper reification.

In particular, the new concept-handle on the block to refer to anti-formalist views of consciousness is “illusionism”. Chalmers writes on The Meta-Problem of Consciousness:

This strategy [of talking about the meta-problem] typically involves what Keith Frankish has called illusionism about consciousness: the view that consciousness is or involves a sort of introspective illusion. Frankish calls the problem of explaining the illusion of consciousness the illusion problem. The illusion problem is a close relative of the meta-problem: it is the version of the meta-problem that arises if one adds the thesis that consciousness is an illusion. Illusionists (who include philosophers such as Daniel Dennett, Frankish, and Derk Pereboom, and scientists such as Michael Graziano and Nicholas Humphrey) typically hold that a solution to the meta-problem will itself solve or dissolve the hard problem.

The Meta-Problem of Consciousness (pages 2-3)

In the broader academic domain, it seems that most scientists and philosophers are neither explicitly formalists nor anti-formalists. The problem is, this question has not been widely discussed. We at QRI believe that there is a fork in the road ahead of us: while both formalist and anti-formalist views are defensible, there is very little room in-between for coherent theories of consciousness. The problem of whether qualia formalism is correct is what Michael Johnson has termed The Real Problem of Consciousness. Solving it would lead to radical improvements in our understanding of consciousness.

282cu8

What a hypothetical eliminativist about consciousness would say to my colleague Michael Johnson in response to the question – “so you think consciousness is just a bag of tricks?”: No, consciousness is not a bag of tricks. It’s an illusion, Michael. A trick is what a Convolutional Neural Network needs to do to perform well on a text classification task. The illusion of consciousness is the radical ontological obfuscation that your brain performs in order to render its internal attentional dynamics as a helpful user-interface that even a kid can utilize for thinking.

Now, largely thanks to the fact that Integrated Information Theory (IIT) is being discussed openly, qualia formalism is (implicitly) starting to have its turn on the table. While we believe that IIT does not work out as a complete account of consciousness for a variety of reasons (our full critique of it is certainly overdue), we do strongly agree with its formalist take on consciousness. In fact, IIT might be the only mainstream theory of consciousness that assumes anything resembling qualia formalism. So its introduction into the water supply (so to speak) has given a lot of people the chance to ponder whether consciousness has a formal structure.

(3) Great New Psychedelic Research

The conference featured the amazing research of Robin Carhart-Harris, Anil K. Seth, and Selen Atasoy, all of whom are advancing the frontier of consciousness research by collecting new data, generating computational models to explain it, and developing big-picture accounts of psychedelic action. We’ve already featured Atasoy’s work here. Her method of decomposing brain activity into harmonics is perhaps one of the most promising avenues for advancing qualia formalist accounts of consciousness (i.e. tentative data-structures in which the information about a given conscious state is encoded). Robin’s entropic brain theory is, we believe, a really good step in the right direction, and we hope to formalize how valence enters the picture in the future (especially as it pertains to being able to explain qualia annealing on psychedelic states). Finally, Anil is steel-manning the case for predictive coding’s role in psychedelic action, and, intriguingly, also advancing the field by trying to find out in exactly what ways the effects of psychedelics can be simulated with VR and strobe lights (cf. Algorithmic Reduction of Psychedelic States, and Getting Closer to Digital LSD).
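
Atasoy’s decomposition can be made concrete: her connectome harmonics are the eigenvectors of a graph Laplacian built from the brain’s structural connectome, and any activity pattern can be expressed as a weighted sum of these modes. Below is a minimal illustrative sketch of that idea; the toy ring graph and random “activity” vector are stand-ins of my own invention, not real connectome data:

```python
import numpy as np

# Toy "connectome": a ring graph on n nodes (illustrative only).
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1  # adjacency matrix

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

# "Harmonics" are the Laplacian's eigenvectors (eigh returns them
# sorted by eigenvalue; the first mode is the constant pattern).
eigvals, eigvecs = np.linalg.eigh(L)

# A mock activity pattern, and its expansion in the harmonic basis.
rng = np.random.default_rng(0)
activity = rng.standard_normal(n)
coeffs = eigvecs.T @ activity       # projection onto each harmonic
reconstructed = eigvecs @ coeffs    # orthonormal basis: lossless

assert np.allclose(reconstructed, activity)
```

Because the eigenvectors form an orthonormal basis, the projection loses no information; the empirically interesting question is which harmonics carry most of the energy in a given brain state.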

(4) Superb Aesthetic

The Science of Consciousness brings together a group of people with eclectic takes on reality, extremely high Openness to Experience, uncompromising curiosity about consciousness, and wide-ranging academic backgrounds, and this results in an amazing aesthetic. In 2016 the underlying tone was set by Dorian Electra and Baba Brinkman, who contributed consciousness-focused music and witty comedy (we need more of that kind of thing in the world). Dorian Electra even released an album titled “Magical Consciousness Conference” which discusses in a musical format classical topics of philosophy of mind such as: the mind-body problem, brains in vats, and the Chinese Room.

aesthetic_dorian

The Science of Consciousness conference carries a timeless aesthetic that is hard to describe. If I were forced to put a label on it, I would say it is qualia-aware paranormal-adjacent psychedelic meta-cognitive futurism, or something along those lines. For instance, see how you can spot philosophers of all ages vigorously dancing to the empiricists vs. rationalists song by Dorian Electra (featuring David Chalmers) at The End of Consciousness Party in this video. Yes, that’s the vibe of this conference. The conference also has a Poetry Slam on Friday in which people read poems about qualia, the binding problem, and psychedelics (this year I performed a philosophy of mind stand-up comedy sketch there). They also play the Zombie Blues that night, in which people take turns to sing about philosophical zombies. Here are some of Chalmers’ verses:

I act like you act

I do what you do

But I don’t know

What it’s like to be you

What consciousness is!

I ain’t got a clue

I got the Zombie Blues!!!

 


I asked Tononi:

“How conscious am I?”

He said “Let’s see…”

“I’ll measure your Phi”

He said “Oh Dear!”

“It’s zero for you!”

And that’s why you’ve got the Zombie Blues!!!

Noteworthy too is the presence of after-parties that end at 3AM, the liberal attitude on cannabis, and the crazy DMT art featured in the lobby. Here are some pictures we took late at night borrowing some awesome signs we found at a Quantum Healing stand.

(5) We found a number of QRI allies and supporters

Finally, we were very pleased to find that Qualia Computing readers and QRI supporters attended the conference. We also made some good new friends along the way, and on the whole we judged the conference to be very much worth our time. For example, we were happy to meet Link Swanson, who recently published his article titled Unifying Theories of Psychedelic Drug Effects. I in fact had read this article a week before the event and thought it was great. I was planning on emailing him after the conference, and I was pleasantly surprised to meet him in person there instead. If you met us at the conference, thanks for coming up and saying hi! Also, thank you to all who organized or ran the conference, and to all who attended it!

30174183_2059722060708154_528874706_o

QRI members, friends, and allies

 

What I Would Like to See More Of

(1) Qualia Formalism

We hope and anticipate that in future years the field of consciousness research will experience an interesting process in which theory proponents will come out as either formalists or anti-formalists. In the meantime, we would love to see more people at least taking seriously the vision of qualia formalism. One of the things we asked ourselves during the conference was: “Where can we find other formalists?”. Perhaps the best heuristic we explored was the simple strategy of going to the most relevant concurrent sessions (e.g. physics and consciousness, and fundamental theories of consciousness). Interestingly, the people who had more formalist intuitions also tended to take IIT seriously.

(2) Explicit Talk About Valence (and Reducing Suffering)

To our knowledge, our talks were the only ones in the event that directly addressed valence (i.e. the pleasure-pain axis). I wish there were more, given the paramount importance of affect in the brain’s computational processing, its role in culture, and of course, its ethical relevance. What is the point of meaning and intelligence if one cannot be happy?

There was one worthy exception: at some point Stuart Hameroff briefly mentioned his theory about the origin of life. He traces the evolutionary lineage of life to molecular micro-environmental systems in which “quantum events [are] shielded from random, polar interactions, enabling more intense and pleasurable [Objective Reduction] qualia.” In his view, pleasure-potential maximization is at the core of the design of the nervous system. I am intrigued by this theory, and I am glad that valence enters the picture here. I would just want to extend this kind of work to include the role of suffering as well. It seems to me that the brain evolved an adaptive range of valence that sinks deep into the negative, and is certainly not just optimizing for pleasure. While our post-human descendants might enjoy information-sensitive gradients of bliss, we Darwinians have been “gifted” by evolution a mixture of negative and positive hedonic qualia.

(3) Awareness of the Tyranny of the Intentional Object

Related to (2), we think that one of the most important barriers for making progress in valence research is the fact that most people (even neuroscientists and philosophers of mind) think of it as a very personal thing with no underlying reality beyond hearsay or opinion. Some people like ice-cream, some like salads. Some people like Pink Floyd, others like Katy Perry. So why should we think that there is a unifying equation for bliss? Well, in our view, nobody actually likes ice-cream or Pink Floyd. Rather, ice-cream and Pink Floyd trigger high-valence states, and it is the high valence states that are actually liked and valuable. Our minds are constructed in such a way that we project pleasure and pain out into the world and think of them as necessarily connected to the external state of affairs. But this, we argue, is indeed an illusion (unlike qualia, which is as real as it gets).

Even the people in the Artificial Intelligence and Machine Consciousness plenary panel seemed subject to the Tyranny of the Intentional Object. During the Q&A section I asked them: “if you were given a billion dollars to build a brain or machine that could experience super-happiness, how would you go about doing so?” Their response was that happiness/bliss only makes sense in relational terms (i.e. by interacting with others in the real world). Someone even said that “dopamine in the brain is just superficial happiness… authentic happiness requires you to gain meaning from what you do in the world.” This is a common view to take, but I would point out that if valence can be generated in artificial minds without human interaction, then high valence could be produced far more directly. Finding methods to modulate valence would be far more efficient if guided by a foundational qualia formalist account of valence.

(4) Bigger Role for the Combination Problem

The number of people who account for the binding problem (also called the combination or boundary problem) is vanishingly small. How and why consciousness appears as unitary is a deep philosophical problem that cannot be dismissed with simple appeals to illusionism or implicit information processing. In general, my sense has been that many neuroscientists, computer scientists, and philosophers of mind don’t spend much time thinking about the binding problem. I have planned an article that will go in depth about why it might be that people don’t take this problem more seriously. As David Pearce has eloquently argued, any scientific theory of consciousness has to explain the binding problem. Nowadays, almost no one addresses it (and much less compellingly provides any plausible solution to it). The conference did have one concurrent session called “Panpsychism and the Combination Problem” (which I couldn’t attend), and a few more people I interacted with seemed to care about it, but the proportion was very small.

(5) Bumping-up the Effect Size of Psi Phenomena (if they are real)

There is a significant amount of interest in Psi (parapsychology) from people attending this conference. I myself am agnostic about the matter. The Institute of Noetic Sciences (IONS) conducts interesting research in this area, and there are some studies that argue that publication bias cannot explain the effects observed. I am not convinced that other explanations have been ruled out, but I am sympathetic to people who try to study weird phenomena within a serious scientific framework (as you might tell from this article). What puzzles me is why there aren’t more people advocating for increasing the effect size of these effects in order to study them. Some data suggests that Psi (in the form of telepathy) is stronger with twins, meditators, people on psychedelics, and people who believe in Psi. But even then the effect sizes reported are tiny. Why not go all-in and try to max out the effect size by combining these features? I.e. why not conduct studies with twins who claim to have had psychic experiences, who meditate a lot, and who can handle high doses of LSD and ketamine in sensory deprivation tanks? If we could bump up the effect sizes far enough, maybe we could definitively settle the matter.
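
To make the stakes concrete, a standard power calculation shows why bumping up the effect size matters so much: the sample size needed to detect an effect grows with the inverse square of its size. Here is a back-of-the-envelope sketch using the normal approximation for a two-group comparison (the helper function and the chosen effect sizes are illustrative, not drawn from any particular Psi study):

```python
from math import ceil

# Critical z-values for a conventional design:
Z_ALPHA = 1.96  # two-sided significance level of 0.05
Z_BETA = 0.84   # statistical power of 0.80

def n_per_group(d):
    """Approximate subjects needed per group to detect a
    standardized effect size d (normal approximation for a
    two-sample comparison of means)."""
    return ceil(2 * ((Z_ALPHA + Z_BETA) / d) ** 2)

print(n_per_group(0.1))  # a tiny, Psi-sized effect
print(n_per_group(0.5))  # a medium effect
```

The tiny effect demands on the order of 1,500+ subjects per group versus a few dozen for the medium effect, which is why amplifying the effect (if it is real) would be far cheaper than endlessly replicating underpowered studies.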

(6) And why not… also a lab component?

Finally, I think that trip reports by philosophically-literate cognitive scientists are much more valuable than trip reports by the average Joe. I would love to see a practical component to the conference someday. The sort of thing that would lead to publications like: “The Phenomenology of Candy-Flipping: An Empirical First-Person Investigation with Philosophers of Mind at a Consciousness Conference.”

Additional Observations

The Cards and Deck Types of Consciousness Theories

To make the analogy between Magic decks and theories of consciousness, we need to find a suitable interpretation for a card. In this case, I would posit that cards can be interpreted as background assumptions, required criteria, emphasized empirical findings, or interpretations of phenomena. Let’s call these, generally, components of a theory.

Like we see in Magic, we will also find that some components support each other while others interact neutrally or mutually exclude each other. For example, if one’s theory of consciousness explicitly rejects the notion that quantum mechanics influences consciousness, then it is irrelevant whether one also postulates that the Copenhagen interpretation of quantum mechanics is correct. On the other hand, if one identifies the locus of consciousness to be in the microtubules inside pyramidal cells, then the particular interpretation of quantum mechanics one has is of paramount importance.

– Qualia Computing in Tucson: The Magic Analogy (2016)

In the 2016 writeup of the conference I pointed out that the dominant theories of consciousness (i.e. deck types in the above sense) were:

  1. Integrated Information Theory (IIT)
  2. Orchestrated Objective Reduction (Orch OR)
  3. Prediction Error Minimization (PEM)
  4. Global Neuronal Workspace Theory (GNWS)
  5. Panprotopsychist (not explicitly named)
  6. Nondual Consciousness Monism (not explicitly named)
  7. Consciousness as the Result of Action-Oriented Cognition (not explicitly named)
  8. Higher Order Thought Theory (HOT)

So how has the meta-game changed since then? Based on the plenary presentations, the concurrent sessions, the workshops, the posters, and my conversations with many of the participants, I’d say (without much objective proof) that the new meta-game now looks more or less like this:

  1. Orchestrated Objective Reduction (Orch OR)
  2. Integrated Information Theory (IIT)
  3. Entropic Brain Theory (EBT)
  4. Global Neuronal Workspace Theory (GNWS)
  5. Prediction Error Minimization (PEM)
  6. Panprotopsychist as a General Framework
  7. Harmonic-Resonant Theories of Consciousness

It seems that Higher Order Thought (HOT) theories of consciousness have fallen out of favor. Additionally, we have a new contender on the table: Harmonic-Resonant Theories of Consciousness are now slowly climbing up the list (a framework which, it turns out, had already been in the water supply since 2006, when Steven Lehar attended the conference, but which only now is gathering popular support).

Given the general telos of the conference, it is not surprising that deflationary theories of consciousness do not seem to have a strong representation. I found a few people here and there who would identify as illusionists, but there were not enough to deserve their place in a short-list of dominant deck types. I assume it would be rather unpleasant for people with this general view about consciousness to hang out with so many consciousness realists.

A good number of people I talked to admitted that they didn’t understand IIT, but that they nonetheless felt that something along the lines of irreducible causality was probably a big part of the explanation for consciousness. In contrast, we also saw a few interesting reactions to IIT – some people said “I hate IIT” and “don’t get me started on IIT”. It is unclear what is causing such reactions, but they are worth noting. Is this an allergic reaction to qualia formalism? We don’t have enough information at the moment to know.

Ontological Violence

The spiritual side of consciousness research is liable to overfocus on ethics and mood hacks rather than on truth-seeking. The problem is that a lot of people have emotionally load-bearing beliefs and points of view connected to how they see reality’s big plot. This is a generalized phenomenon, but its highest expression is found within spiritually-focused thinkers. Many of them come across as evangelizers rather than philosophers, scientists, explorers, or educators. For example: two years ago, David Pearce and I had an uncomfortable conversation with a lady who had a very negative reaction to Pearce’s take on suffering (i.e. that we should use biotechnology to eradicate it). She insisted suffering was part of life and that bliss can’t exist without it (a common argument for sure, but the problem was the intense emotional reaction and insistence on continuing the conversation until we had changed our minds).

We learned our lesson – if you suspect that a person has emotionally load-bearing beliefs about a grand plan or big spiritual telos, don’t mention you are trying to reduce suffering with biotechnology. It’s a gamble, and the chance for a pleasant interaction and meaningful exchange of ideas is not worth the risk of interpersonal friction, time investment, and the pointlessness of a potential ensuing heated discussion.

This brings me to an important matter…

Who are the people who are providing genuinely new contributions to the conversation?

There is a lot of noise in the field of consciousness research. Understandably, a lot of people react to this state of affairs with generalized skepticism (and even cynicism). In my experience, if you approach a neuroscientist in order to discuss consciousness, she will usually choose to simply listen to her priors rather than to you (no matter how philosophically rigorous and scientifically literate you may be).

And yet, at this conference and many other places, there are indeed a lot of people who have something new and valuable to contribute to our understanding of consciousness. So who are they? What allows a person to make a novel contribution?

I would suggest that people who fall into one of the following four categories have a higher chance of doing so:

  1. People who have new information
  2. Great synthesizers
  3. Highly creative people with broad knowledge of the field
  4. New paradigm proposers

For (1): This can take one of three forms: (a) new information about phenomenology (i.e. from rational psychonauts with strong interpretation and synthesis skills); (b) new third-person data (i.e. as provided by scientists who conduct new research on e.g. neuroimaging); and (c) new information on how to map third-person data to phenomenology, especially for rare states of consciousness (i.e. as obtained from people who both have access to third-person data sources and are excellent, experienced phenomenologists). (a) is very hard to come by because most psychonauts and meditators fall for one or more traps (e.g. believing in the tyranny of the intentional object, being direct realists, being dogmatic about a given pre-scientific metaphysic, etc.). (b) is constrained by the number of labs and the current Kuhnian paradigms within which they work. And (c) is not only rare, but currently nonexistent. Hence, there are necessarily few people who can contribute to the broader conversation about consciousness by bringing new information to the table.

For (2): Great synthesizers are hard to come by. They do not need to generate new paradigms or have new information. What they have is the key ability to find what the novel contribution in a given proposal is. They gather what is similar and different across paradigms, and make effective lossless compressions – saving us all valuable time, reducing confusion, and facilitating the cross-pollination between various disciplines and paradigms. This requires the ability to extract what matters from large volumes of extremely detailed and paradigm-specific literature. Hence, it is also rare to find great synthesizers.

For (3): Being able to pose new questions, and generate weird but not random hypotheses can often be very useful. Likewise, being able to think of completely outrageous outside-the-box views might be key for advancing our understanding of consciousness. That said, non-philosophers tend to underestimate just how weird an observation about consciousness needs to be for it to be new. This in practice constrains the range of people able to contribute in this way to people who are themselves fairly well acquainted with a broad range of theories of consciousness. That said, I suspect that this could be remedied by forming groups of people who bring different talents to the table. In Peaceful Qualia I discussed a potential empirical approach for investigating consciousness which involves having people who specialize in various aspects of the problem (e.g. being great psychonauts, excellent third-person experimentalists, high-quality synthesizers, solid computational modelers, and so on). But until then, I do not anticipate much progress will come from people who are simply very smart and creative – they also need to have either privileged information (such as what you get from combining weird drugs and brain-computer interfaces), or be very knowledgeable about what is going on in the field.

And (4): This is the most difficult and rarest of all, for it requires some degree of all three previous attributes. New paradigm proposers’ work wouldn’t be possible without the work of many other people in the previous three categories. Yet, of course, their contributions will be the most world-changing of all. Explicitly, this is the role that we are aiming for at the Qualia Research Institute.

In addition to the above, there are other ways of making highly valuable contributions to the conversation. An example would be those individuals who have become living expressions of current theories of consciousness. That is, people who have deeply understood some paradigm and can propose paradigm-consistent explanations for completely new evidence. E.g. people who can quickly figure out “what would Tononi say about X?” no matter how weird X is. It is my view that one can learn a lot from people in this category. That said… don’t ever expect to change their minds!

A Final Suggestion: Philosophical Speed Dating

To conclude, I would like to make a suggestion to increase the value of this and similar conferences: philosophical speed dating. This might be valuable for two reasons. First, I think that a large percentage of people who attend TSC are craving interactions with others who also wonder about consciousness. After all, being intrigued and fascinated by this topic is not very common. Casual interest? Sure. But obsessive curiosity? Pretty uncommon. And most people who attend TSC are in the latter category. At the same time, it is never very pleasant to interact with people who are likely to simply not understand where you are coming from. The diversity of views is so large that finding a person with whom you can have a cogent and fruitful conversation is quite difficult for a lot of people. A Philosophical Speed Dating session in which people quickly state things like their interest in consciousness, take on qualia, preferred approaches, favorite authors, paradigm affinities, etc. would allow philosophical kindred souls to meet at a much higher rate.

And second, in the context of advancing the collective conversation about consciousness, I have found that having people who know where you are coming from (and either share or understand your background assumptions) is the best way to go. The best conversations I’ve had with people usually arise when we have a strong base of shared knowledge and intuitions, but disagree on one or two key points we can identify and meaningfully address. Thus a Philosophical Speed Dating session could lead to valuable collaborations.

And with that, I would like to say: If you do find our approach interesting or worth pursuing, do get in touch.

Till next time, Tucson!


* In Chalmers’s paper about the Meta-Problem of Consciousness he describes his reason for investigating the subject: “Upon hearing about this article, some people have wondered whether I am converting to illusionism, while others have suspected that I am trying to subvert the illusionist program for opposing purposes. Neither reaction is quite correct. I am really interested in the meta-problem as a problem in its own right. But if one wants to place the paper within the framework of old battles, one might think of it as lending opponents a friendly helping hand.” The quality of a philosopher should be determined not only by their ability to make a good case for their own views, but also by their capacity to talk convincingly about their opponents’. And on that metric, David is certainly world-class.

Everything in a Nutshell

David Pearce on Quora, in response to the question “What are your philosophical positions in one paragraph?”:

“Everyone takes the limits of his own vision for the limits of the world.”
(Schopenhauer)

All that matters is the pleasure-pain axis. Pain and pleasure disclose the world’s inbuilt metric of (dis)value. Our overriding ethical obligation is to minimise suffering. After we have reprogrammed the biosphere to wipe out experience below “hedonic zero”, we should build a “triple S” civilisation based on gradients of superhuman bliss. The nature of ultimate reality baffles me. But intelligent moral agents will need to understand the multiverse if we are to grasp the nature and scope of our wider cosmological responsibilities. My working assumption is non-materialist physicalism. Formally, the world is completely described by the equation(s) of physics, presumably a relativistic analogue of the universal Schrödinger equation. Tentatively, I’m a wavefunction monist who believes we are patterns of qualia in a high-dimensional complex Hilbert space. Experience discloses the intrinsic nature of the physical: the “fire” in the equations. The solutions to the equations of QFT or its generalisation yield the values of qualia. What makes biological minds distinctive, in my view, isn’t subjective experience per se, but rather non-psychotic binding. Phenomenal binding is what consciousness is evolutionarily “for”. Without the superposition principle of QM, our minds wouldn’t be able to simulate fitness-relevant patterns in the local environment. When awake, we are quantum minds running subjectively classical world-simulations. I am an inferential realist about perception. Metaphysically, I explore a zero ontology: the total information content of reality must be zero on pain of a miraculous creation of information ex nihilo. Epistemologically, I incline to a radical scepticism that would be sterile to articulate. Alas, the history of philosophy twinned with the principle of mediocrity suggests I burble as much nonsense as everyone else.


Image credit: Joseph Matthias Young

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

Alan Watts, talking about Camus’ claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented, consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (aka Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together, as a unit, in order to hang together. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. In the meantime, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such an experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain, *in its own terms*, why other worldviews are popular as well. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people. That evil beings exist objectively, and that it is righteous to punish them and see them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck here. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, where there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumptions of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). More so, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above), we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or isn’t permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived either as sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle, it is possible to interpret the desire to be righteous as being motivated by high-valence states and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

Balance Between Good and Evil by tvtropes

Now, the stated reasons why people buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because: people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication; they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws); or they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and projected it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, bind them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory-farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially through inaction and by not thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for this reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, recognizing that some states of consciousness are inherently better than others in some intrinsic, subjective-value sense.

However, there is usually no principled framework for assessing whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics, for fear of sounding unwise or insensitive. And when push comes to shove, people at this level lack good arguments to decisively rule out why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures push in the opposite direction: negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and relative to the general population they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is to formalize the situation and work out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts suggesting it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used to steer evolution in a positive direction. In particular, as of 2017:

  1. A very large proportion of the economy is dedicated to trading positive experiences for money, rather than mere survival or tools for power. Thus an economy of information about states of consciousness is still feasible.
  2. A large fraction of the population is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and ultimately, economically useful (see hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested (or at least curious) about the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence but also, amazingly, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies in which the highest value is the collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider for example the contrast between Burning Man and Las Vegas. Burning Man is a place that works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place that is filled to the top with pure replicators in the forms of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that makes it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. Per the definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. In the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they may require. Such utility is a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

system_specifications

The image above represents the relevant components of a system that might be used for some purpose by an intelligence. We have the inputs, the outputs, the constraints (such as temperature, materials, etc.) and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

basic_system

Now, we introduce the notion of the “valence of the system given a particular input” (i.e. the valence of the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

valence_altruism
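To make the distinction concrete, here is a minimal Python sketch of one possible form of U. The text leaves the exact functional form unspecified, so the additive combination and the altruism_weight parameter below are illustrative assumptions, not the article’s definition:

```python
def utility(efficiency, valence, altruism_weight=0.0):
    """One hypothetical form of U(e(o|i, s, c), v(s|i)).

    A pure replicator sets altruism_weight to 0, so the valence of the
    system drops out of its utility function entirely. An altruistic
    intelligence uses altruism_weight > 0, penalizing negative valence
    and rewarding positive valence.
    """
    return efficiency + altruism_weight * valence

# A system that is efficient but suffers (valence = -5):
pure_replicator_view = utility(1.0, -5.0, altruism_weight=0.0)  # 1.0
altruist_view = utility(1.0, -5.0, altruism_weight=0.5)         # -1.5
```

The point of the sketch is only that the two kinds of intelligence can rank the very same system differently: for the pure replicator the suffering term is simply invisible.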

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the sum, over all inputs, of the utility of each input multiplied by its probability.

input_probabilities

(Note: a more complete formalization would also weigh the importance of each input-output transformation, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system given the distribution of inputs it is used for, its valence, its efficiency metrics, and its constraints as E[U(s|v, e, c, P(I))]:

chosen_system

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
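As a sketch of this selection step, the snippet below computes E[U(s|v, e, c, P(I))] as a probability-weighted sum of per-input utilities and picks the maximizing system. The candidate systems, their per-input utilities, and the input distribution are all made-up numbers for illustration:

```python
# Probability P(i) of each of three inputs (hypothetical distribution).
P = [0.7, 0.2, 0.1]

def expected_utility(per_input_utilities, probabilities):
    """E[U]: the utility of each input weighted by its probability."""
    return sum(u * p for u, p in zip(per_input_utilities, probabilities))

# Hypothetical per-input utilities U(s|i) for two candidate systems.
systems = {
    "system_A": [1.0, 0.5, -2.0],  # very efficient, but terrible on the rare input
    "system_B": [0.8, 0.6, 0.4],   # slightly less efficient, uniformly decent
}

# The intelligence chooses the system that maximizes expected utility.
best = max(systems, key=lambda name: expected_utility(systems[name], P))
```

With these made-up numbers, system_B wins despite being less efficient on the most common input, because the expectation integrates over the whole input distribution.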

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on their altruism. If altruism does not reduce such reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). As evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of such systems. In other words, evolution pushes intelligences to become pure replicators in the long run.

Hence we should ask: how can altruism increase the chances of reproduction? One possibility would be for the environment to reward entities that are altruistic. Unfortunately, in the long run we might find that environments that reward altruistic entities produce less efficient entities than environments that don’t. If there are two very similar environments, one which rewards altruism and one which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism is implementing the reward for altruism in the former. Thus, we suggest finding environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to exist for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.
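The dynamic described above can be caricatured with textbook replicator dynamics. In the sketch below (all parameters hypothetical), altruists pay a small efficiency cost relative to pure replicators and reproduction is proportional to fitness; unless the environment compensates for the cost, the altruist fraction decays toward zero:

```python
def altruist_fraction_over_time(cost=0.05, reward=0.0, generations=200, p0=0.5):
    """Discrete replicator dynamics for a two-type population.

    Altruists have fitness 1 - cost + reward (the 'reward' term stands in
    for an environment that compensates altruism); pure replicators have
    fitness 1. Returns the altruist fraction at each generation.
    """
    p = p0
    history = [p]
    for _ in range(generations):
        f_altruist = 1.0 - cost + reward
        mean_fitness = p * f_altruist + (1 - p) * 1.0
        p = p * f_altruist / mean_fitness
        history.append(p)
    return history

# Without an environmental reward, the altruist fraction dwindles:
unrewarded = altruist_fraction_over_time(cost=0.05, reward=0.0)
# With a reward baked into the environment that outweighs the cost, it grows:
rewarded = altruist_fraction_over_time(cost=0.05, reward=0.1)
```

The toy model also makes the fragility argument vivid: the “rewarded” trajectory only holds for as long as the reward mechanism itself survives, which is exactly the loophole the paragraph above worries about.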

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing such valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in defining the rough outline of kinds of theories of consciousness. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would depend not so much on the details of the system as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How we could formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned with pain or pleasure, and there is cause to believe that behaviorism itself might be anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are in fact hedonically charged, and for what reason, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind gets more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses being made about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state might be. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than this working as a reductio ad absurdum of functionalism, many of its proponents have instead turned around to conclude that consciousness itself is not a natural kind. This does represent an important challenge for defining the valence of the system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.

We also should note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account of consciousness refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purpose of this article should be good enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, both in relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need to obtain this frame-invariant description of what the system is in a given state. Thus, we need a function that takes as input physical measurements of the system and returns the best possible approximation to what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. Now, in practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well-approximated with high-level features of it.

The main problem with Non-Materialist Physicalism arises when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all of the relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem to be a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of such stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal, aspects of our existence happen to be computationally advantageous. Just as using quantum computing for integer factorization, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, using bound conscious experiences may have helped sentient organisms outcompete non-sentient animals.

Of course, there is as yet no evidence of macroscopic quantum coherence in the brain, which is arguably too hot for it anyway, so on the face of it Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that makes both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters required by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). So, at one extreme, it may be the case that a fully optimized brain simply does not need consciousness to solve problems. And at the other extreme, it may turn out that consciousness is extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out the way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all future entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Thus the case for pushing our evolutionary history in the right direction right now before we give up.

 Coming Next: The Hierarchy of Cooperators

Now that we covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality in its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, an availability of states of consciousness, and a set of beliefs and practices to help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms that are producing the output are probably highly complex and use information processing with human-like linguistic representations, which means that, by all means, the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think these are among the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big-picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

Raising the Table Stakes for Successful Theories of Consciousness

What should we expect out of a theory of consciousness?

For a scientific theory of consciousness to have even the slightest chance of being correct, it must be able to address, at the very least, the following four questions*:

  1. Why consciousness exists at all (i.e. “the hard problem“; why we are not p-zombies)
  2. How it is possible to experience multiple pieces of information at once in a unitary moment of experience (i.e. the phenomenal binding problem; the boundary problem)
  3. How consciousness exerts the causal power necessary to be recruited by natural selection and allow us to discuss its existence (i.e. the problem of causal impotence vs. causal overdetermination)
  4. How and why consciousness has its countless textures (e.g. phenomenal color, smell, emotions, etc.) and the interdependencies of their different values (i.e. the palette problem)

In addition, the theory must be able to generate experimentally testable predictions. In Popper’s sense, the theory must make “risky” predictions. In a Bayesian sense, the theory must be able to generate predictions that have a much higher likelihood given that the theory is correct than given that it is not, so that the posterior probabilities of the different hypotheses are substantially different from their priors once the experiment is actually carried out.
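The Bayesian point can be made concrete with a one-line update. In the sketch below (all numbers hypothetical), a “risky” prediction is one whose likelihood under the theory is much higher than under its negation, so a confirming observation moves the posterior far from the prior, while a “safe” prediction barely moves it:

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Risky prediction: P(D|H) = 0.9 but P(D|not H) = 0.05.
# A confirming observation lifts a skeptical prior of 0.1 to about 0.67.
risky = posterior(0.1, 0.9, 0.05)

# Safe prediction: P(D|H) = 0.9 and P(D|not H) = 0.85.
# The same observation leaves the prior almost untouched (about 0.105).
safe = posterior(0.1, 0.9, 0.85)
```

This is exactly the sense in which a theory whose predictions would be expected anyway earns almost nothing from having them confirmed.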

As discussed in a previous article, most contemporary philosophies of mind are unable to address one or more of these four problems (or simply fail to make any interesting predictions). David Pearce’s non-materialist physicalist idealism (not the schizophrenic word-salad it may seem at first) is one of the few theories that promises to meet these criteria and makes empirical predictions. This theory addresses the above questions in the following way:

(1) Why does consciousness exist?

Consciousness exists because reality is made of qualia. In particular, one might say that physics is implicitly the science that studies the flux of qualia. This would imply that in fact all that exists is a set of experiences whose interrelationships are encoded in the Universal Wavefunction of Quantum Field Theory. Thus we are collapsing two questions (“why does consciousness arise in our universe?” and “why does the universe exist?”) into a single question (“why does anything exist?”). Moreover, the question “why does anything exist?” may ultimately be solved with Zero Ontology. In other words, all that exists is implied by the universe having literally no information whatsoever. All (apparent) information is local; universally we live in an information-less quantum Library of Babel.

(2) Why and how is consciousness unitary?

Due to the expansion of the universe the universal wavefunction has topological bifurcations that effectively create locally connected networks of quantum entanglement that are unconnected to the rest of reality. These networks meet the criteria of being ontologically unitary while having the potential to hold multiple pieces of information at once. In other words, Pearce’s theory of consciousness postulates that the world is made of a large number of experiences, though the vast majority of them are incredibly tiny and short-lived. The overwhelming bulk of reality is made of decohered micro-experiences which are responsible for most of the phenomena we see in the macroscopic world ranging from solidity to Newton’s laws of motion.

A few of these networks of entanglement are us: you, right now, as a unitary “thin subject” of experience, according to this theory, are one of these networks (cf. Mereological Nihilism). Counter-intuitively, while a mountain is in some sense much bigger than yourself, at a deeper level you are bigger than the biggest object you will find in a mountain. Taking seriously the phenomenal binding problem we have to conclude that a mountain is for the most part just made of fields of decohered qualia, and thus, unlike a given biologically-generated experience, it is not “a unitary subject of experience”. In order to grasp this point it is necessary to contemplate a very extreme generalization of Empty Individualism: not only is it that every moment of a person’s experience is a different subject of experience, but the principle applies to every single network of entanglement in the entire multiverse. Only a tiny minority of these have anything to do with minds representing worlds. And even those that participate in the creation of a unitary experience exist within an ecosystem that gives rise to an evolutionary process in which quintillions of slightly different entanglement networks compete in order to survive in the extreme environments provided by nervous systems. Your particular experience is an entanglement network that evolved in order to survive in the specific brain state that is present right now. In other words, macroscopic experiences are the result of harnessing the computational power of Quantum Darwinism by applying it to a very particular configuration of the CNS. Brain states themselves encode Constraint Satisfaction Problems with the networks of electric gradients across firing neurons in sub-millisecond scales instantiating constraints whose solutions are found with sub-femtosecond quantum darwinism.

(3) How can consciousness be causally efficacious?

Consciousness exerts its causal power by virtue of being the only thing that exists. If anything is causal at all, it must, in the final analysis, be consciousness. No matter one’s ultimate theory of causality, assuming that physics describes the flux of qualia, then what instantiates such causality has to be this very flux.

Even under Eternalism/block view of the universe/Post-Everettian QM you can still meaningfully reconstruct causality in terms of the empirical rules for statistical independence across certain dimensions of fundamental reality. The dimensions that have time-like patterns of statistical independence will subjectively be perceived as being the arrows of time in the multiverse (cf. Timeless Causality).

Now, an important caveat with this view of the relationship between qualia and causality is that it seems as if at least a weak version of epiphenomenalism must be true. The inverted spectrum thought experiment (ironically, usually used in favor of the existence of qualia) can be used to question the causal power of qualia. This brings us to the fourth point:

(4) How do we explain the countless textures of consciousness?

How and why does consciousness have its countless textures, and what determines their interrelationships? Pearce anticipates that someday we will have a Rosetta Stone for translating patterns of entanglement in quantum fields to corresponding varieties of qualia (e.g. colors, smells, sounds, etc.). Now, admittedly it seems far-fetched that the different quantum fields and their interplay will turn out to be the source of the different qualia varieties. But is there something that in principle precludes this ontological correspondence? Yes, there are tremendous philosophical challenges here, the most salient of which might be the “being/form boundary”. This is the puzzle concerning why states of being (i.e. networks of entangled qualia) would act a certain way by virtue of their phenomenal character in and of itself (assuming their phenomenal character is what gives them reality to begin with). Indeed, what could possibly attach, at a fundamental level, the behavior of a given being to its intrinsic subjective texture? A compromise between full-fledged epiphenomenalism and qualia-based causality is to postulate a universal principle concerning the preference for full-spectrum states over highly differentiated ones. Consider, for example, how negative and positive electric charges “seek to cancel each other out”. Likewise, the Quantum Chromodynamics of quarks inside protons and neutrons works under a similar but generalized principle: color charges seek to cancel/complement each other out and become “white” or “colorless”. This principle would suggest that the causal power of specific qualia values comes from the gradient ascent towards more full-spectrum-like states rather than from the specific qualia values on their own. If this were true, one may legitimately wonder whether hedonium and full-spectrum states are perhaps one and the same thing (cf. valence structuralism).
In some ways this account of the “being/form boundary” is similar to process philosophy, but unlike process philosophy, here we are also taking mereological nihilism and wavefunction monism seriously.

However far-fetched it may be to postulate intrinsic causal properties for qualia values, if the ontological unity of science is to survive, there might be no other option. As we’ve seen, simple “patterns of computation” or “information processing” cannot be the source of qualia, since nothing that isn’t a quantum coherent wavefunction actually has independent existence. Unitary minds cannot supervene on decohered quantum fields. Thus the various kinds of qualia have to be searched for in networks of quantum entanglement; within a physicalist paradigm there is nowhere else for them to be.

Alternative Theories

I am very open to the possibility that other theories of consciousness are able to address these four questions. I have yet to see any evidence of this, though. But, please, change my mind if you can! Does your theory of consciousness rise to the challenge?


* This particular set of criteria was proposed by David Pearce (cf. Qualia Computing in Tucson). I would agree with him that these are crucial questions; indeed they make up the bare minimum that such a theory must satisfy. That said, we can formulate more comprehensive sets of problems to solve. An alternative framework that takes this a little further can be found in Michael Johnson’s book Principia Qualia (Eight Problems for a New Science of Consciousness).

Qualia Computing Attending the 2017 Psychedelic Science Conference

From the 19th to the 24th of April I will be hanging out at Psychedelic Science 2017 (if you are interested in attending but have not bought the tickets: remember you can register until the 14th of February).

In case you enjoy Qualia Computing and you are planning on going, now you can meet the human who is mostly responsible for these articles. I am looking forward to meeting a lot of awesome researchers. If you see me and enjoy what I do, don’t be afraid to say hi.

Why Care About Psychedelics?

Although the study of psychedelics and their effects is not a terminal value here in Qualia Computing, they are instrumental in achieving the main goals. The core philosophy of Qualia Computing is to (1) map out the state-space of possible experiences, (2) identify the computational properties of consciousness, and (3) reverse-engineer valence so as to find the way to stay positive without going insane.

Psychedelic experiences happen to be very informative and useful in making progress towards these three goals. The quality and magnitude of the consciousness alteration that they induce lends itself to exploring these questions. First, the state-space of humanly accessible experiences is greatly amplified once you add psychedelics into the mix. Second, the nature of these experiences is anything but computationally dull (cf. alcohol and opioids). On the contrary, psychedelic experiences involve non-ordinary forms of qualia computing: the textures of consciousness interact in non-trivial ways, and it stands to reason that some combinations of these textures will be recruited in the future for more than aesthetic purposes. They will be used for computational purposes, too. And third, psychedelic states greatly amplify the range of valence (i.e. the maximum intensity of both pain and pleasure). They unlock the possibility of experiencing peak bliss as well as intense suffering. This strongly suggests that whatever underpins valence at the fundamental level, psychedelics are able to amplify it to a fantastic (and terrifying) extent. Thus, serious valence research will undoubtedly benefit from psychedelic science.

It is for this reason that psychedelics have been a major topic explored here since the beginning of this project. Here is a list of articles that directly deal with the subject:

List of Qualia Computing Psychedelic Articles

1) Psychophysics For Psychedelic Research: Textures

How do you make a psychophysical experiment that tells you something foundational about the information-processing properties of psychedelic perception? I proposed to use an experimental approach invented by Benjamin J. Balas based on the anatomically-inspired texture analysis and synthesis techniques developed by Eero Simoncelli. In brief, one seeks to determine which summary statistics are sufficient to create perceptual (textural) metamers. In turn, in the context of psychedelic research, this can help us determine which statistical properties are best discriminated while sober and which differences are amplified while on psychedelics.
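To make the metamer logic concrete, here is a toy sketch (not the actual Portilla–Simoncelli model, whose statistic set is far richer): compute a small vector of summary statistics for a grayscale texture, and call two textures candidate metamers when the distance between their vectors is near zero. The function names and the particular statistics chosen are illustrative only.

```python
import math

def summary_stats(img):
    """Marginal pixel statistics plus lag-1 horizontal autocorrelation.
    `img` is a 2D list of grayscale values in [0, 1]. (Toy statistic set;
    real texture models use many more spatially-structured statistics.)"""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    sd = math.sqrt(var) or 1.0  # avoid division by zero for flat images
    skew = sum(((p - mean) / sd) ** 3 for p in pixels) / n
    kurt = sum(((p - mean) / sd) ** 4 for p in pixels) / n
    # Lag-1 horizontal autocorrelation captures a sliver of spatial structure.
    pairs = [(row[i], row[i + 1]) for row in img for i in range(len(row) - 1)]
    acov = sum((a - mean) * (b - mean) for a, b in pairs) / len(pairs)
    return [mean, var, skew, kurt, acov / var if var else 0.0]

def stat_distance(img_a, img_b):
    """Euclidean distance between summary-statistic vectors. Two different
    textures with distance ~0 would be candidate metamers under this set."""
    sa, sb = summary_stats(img_a), summary_stats(img_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(sa, sb)))
```

The experimental question then becomes: which statistics must be added to this vector before sober observers can no longer tell synthesized patches apart, and does that set shrink or shift under psychedelics?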

2) State-Space of Drug Effects

I distributed a survey in which I asked people to rate drug experiences along 60 different dimensions, and then conducted factor analysis on the responses. This way I empirically derived six major latent traits that account for more than half of the variance across all drug experiences. Three of these factors are tightly related to valence, which suggests that hedonic recalibration might benefit from a multi-faceted approach.
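For readers unfamiliar with the technique, here is a minimal sketch of the extraction step, using PCA on the correlation matrix as a simplified stand-in for a full factor analysis with rotation (the function name and synthetic data are illustrative, not the actual survey pipeline):

```python
import numpy as np

def principal_factors(ratings, k):
    """Extract k latent dimensions from an (n_respondents x n_items)
    ratings matrix via PCA on the correlation matrix -- a common first
    pass before a proper factor rotation (e.g. varimax).
    Returns (loadings, explained_variance_ratio)."""
    X = np.asarray(ratings, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each item
    R = (Z.T @ Z) / len(Z)                     # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]      # top-k components
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    explained = eigvals[order] / eigvals.sum()
    return loadings, explained
```

With 60 rated dimensions, six factors capturing over half the total variance is the kind of result this procedure reports: the `explained` vector summed over the retained factors.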

3) How to Secretly Communicate with People on LSD

I suggest that control interruption (i.e. the failure of feedback inhibition during psychedelic states) can be employed to transmit information in a secure way to people who are in other states of consciousness. A possible application of this technology might be: you and your friends at Burning Man want to send a secret message to every psychedelic user in a particular camp in such a way that no infiltrated cop is able to decode it. To do so, you could instantiate the techniques outlined in the article on a large LED display.

4) The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

This article discusses the phenomenology of DMT states from the point of view of differential geometry. In particular, an argument is provided in favor of the view that high grade psychedelia usually involves a sort of geometric hyperbolization of phenomenal space.

5) LSD and Quantum Measurements: Can you see Schrödinger’s cat both dead and alive on acid?

We provide an empirical method to test the (extremely) wild hypothesis that it is possible to experience “multiple branches of the multiverse at once” on high doses of psychedelics. The point is not to promote a particular interpretation of such experiences. Rather, the point is that we can actually generate predictions from such interpretations and then go ahead and test them.

6) Algorithmic Reduction of Psychedelic States

People report a zoo of psychedelic effects. However, as in most things in life, there may be a relatively small number of basic effects that, when combined, can account for the wide variety of phenomena we actually observe. Algorithmic reductions are proposed as a conceptual framework for analyzing psychedelic experiences. Four candidate main effects are proposed.

7) Peaceful Qualia: The Manhattan Project of Consciousness

Imagine that there was a world-wide effort to identify the varieties of qualia that promote joy and prosocial behavior at the same time. Could these be used to guarantee world peace? By giving people free access to the most valuable and prosocial states of consciousness one may end up averting large-scale conflict in a sustainable way. This article explores how this investigation might be carried out and proposes organizational principles for such a large-scale research project.

8) Getting closer to digital LSD

Why are the Google Deep Dream pictures so trippy? This is not just a coincidence. People call them trippy for a reason.

9) Generalized Wada-Test

In a Wada-test a surgeon puts half of your brain to sleep and evaluates the cognitive skills of your awake half. Then the process is repeated in mirror image. Can we generalize this procedure? Imagine that instead of just putting a hemisphere to sleep we gave it psychedelics. What would it feel like to be tripping, but only with your right hemisphere? Even more generally: envision a scheme in which one alternates a large number of paired states of consciousness and studies their mixtures empirically. This way it may be possible to construct a network of “opinions that states of consciousness have about each other”. Could this help us figure out whether there is a universal scale for subjective value (i.e. valence)?
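The “network of opinions” idea can be made concrete with a toy sketch: given a matrix of pairwise judgments between states, assign each state a score and measure how often the pairwise judgments agree with the resulting one-dimensional ordering. High agreement is evidence for a universal scale; a rock-paper-scissors-style cycle is evidence against one. The scoring rule (a simple Borda-style average) and function name are illustrative assumptions, not a method from the article.

```python
def scale_fit(pref):
    """pref[i][j] in [0, 1]: fraction of paired/mixed states i+j in which
    state i was judged better than state j (so pref[j][i] = 1 - pref[i][j]).
    Returns (scores, consistency): a Borda-style score per state, and the
    fraction of pairwise judgments that agree with the 1-D score ordering."""
    n = len(pref)
    # Each state's score: its average "win rate" against all other states.
    scores = [sum(pref[i][j] for j in range(n) if j != i) / (n - 1)
              for i in range(n)]
    total = agree = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            # Does the score ordering predict this pairwise judgment?
            if (scores[i] > scores[j]) == (pref[i][j] > 0.5):
                agree += 1
    return scores, agree / total
```

A fully transitive preference matrix yields consistency 1.0; a cyclic one (state A beats B, B beats C, C beats A) yields strictly lower consistency, flagging that no single valence scale can explain the data.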

10) Psychedelic Perception of Visual Textures

In this article I discuss some problems with verbal accounts of psychedelic visuals, and I invite readers to look at some textures (provided in the article) and try to describe them while high on LSD, 2C-B, DMT, etc. You can read some of the hilarious comments already left there.

11) The Super-Shulgin Academy: A Singularity I Can Believe In

Hard to summarize.


Schrödinger’s Neurons: David Pearce at the “2016 Science of Consciousness” conference in Tucson

Abstract:


Mankind’s most successful story of the world, natural science, leaves the existence of consciousness wholly unexplained. The phenomenal binding problem deepens the mystery. Neither classical nor quantum physics seem to allow the binding of distributively processed neuronal micro-experiences into unitary experiential objects apprehended by a unitary phenomenal self. This paper argues that if physicalism and the ontological unity of science are to be saved, then we will need to revise our notions of both 1) the intrinsic nature of the physical and 2) the quasi-classicality of neurons. In conjunction, these two hypotheses yield a novel, bizarre but experimentally testable prediction of quantum superpositions (“Schrödinger’s cat states”) of neuronal feature-processors in the CNS at sub-femtosecond timescales. An experimental protocol using in vitro neuronal networks is described to confirm or empirically falsify this conjecture via molecular matter-wave interferometry.


For more see: https://www.physicalism.com/


(cf. Qualia Computing in Tucson: The Magic Analogy)


(Trivia: David Chalmers is one of the attendees of the talk and asks a question at 24:03.)