Wada Test + Phenomenal Puzzles: Testing the Independent Consciousness of Individual Brain Hemispheres

by Quintin Frerichs


One of the most pressing problems in philosophy of mind is solving the so-called 'problem of other minds': the difficulty of proving that agents outside oneself have qualia. A workable solution to the problem of other minds would endow us with the ability to define the moral patienthood of present-day biological entities, evade our solipsistic tendencies, and open the door to truly understanding future nonhuman intelligences, should they prove to be conscious. Even more strangely, it would allow us to evaluate whether dream characters or the alters produced by dissociative identity disorder are separate consciousnesses. Conclusively proving the existence of qualia in other biological life which lacks the capacity for language and higher-order thought is not, to my knowledge, even conceptually feasible at this time. In the case of two agents with the capacity to communicate and problem-solve, however, a solution has been proposed which requires the agent being tested to prove they have qualia by solving a "phenomenal puzzle". Crucially, the solution does not require that the two agents experience the same qualia, simply that there exists a mapping between their respective conscious states.

If an agent A wishes to prove the existence of qualia in agent B using the above procedure, then A and B must have the following:

  1. A phenomenal bridge (e.g. a biological neural network that connects your brain to someone else’s brain so that both brains now instantiate a single consciousness).
  2. A qualia calibrator (a device that allows you to cycle through many combinations of qualia values quickly so that you can compare the sensory-qualia mappings in both brains and generate a shared vocabulary for qualia values).
  3. A phenomenal puzzle (as described above).
  4. The right set and setting: the use of a proper protocol.

I contend that there may already be a procedure which can be used to generate a reversible phenomenal bridge between two separate minds: a way to make two minds one and subsequently one mind two. Movement in each of these two directions has apparently been demonstrated: fusion by craniopagus twins connected with a thalamic bridge, and fission by corpus callosotomy separating the two cerebral hemispheres. There is tantalizing evidence in each case that consciousness is being fused or fissioned, respectively. In the case of the Hogan sisters, the apparently unitary mind has access to sensory information from the sensory organs of each cranium. In the case of separating the hemispheres there is some debate: alien hand syndrome has suggested the existence of dual consciousness, while other findings have cast doubt on the existence of two separate consciousnesses. Whereas surgical separation of the hemispheres is as yet permanent, a chemically-induced separation of the hemispheres via the Wada test may provide new avenues for testing the problem of other minds. Although some forms of communication (namely language, which is largely left-lateralized) are impaired by the Wada test, other forms such as singing can be left intact. Thus, I believe a combination of Gazzaniga's procedure and Gómez Emilsson's phenomenal puzzle approach, in conjunction with a working qualia calibrator, could demonstrate the existence or absence of dual consciousness in the human mind-brain. A version of the Wada test with higher specificity may also be required, to negate some of the characteristic symptoms of confusion, hemineglect, and loss of verbal comprehension.


The procedure (utilizing the state space of color, with agents L and R corresponding to the left and right hemispheres) would be as follows: 

Note: a difficulty of the procedure outlined below is determining which hemisphere should serve as the benchmark. While language ability is usually dominant in the left hemisphere (especially in right-handed individuals), and is therefore eliminated when the left hemisphere is inactivated during the Wada test, this is not always the case. In cases where at least some language ability is preserved in each hemisphere, either can reliably serve as the point of comparison.

  1. Design a phenomenal puzzle, such that the solution corresponds to reporting the number of just-noticeable differences (JNDs) required to traverse a linear mapping between two locations in the state space of color (a code sketch of this scoring rule follows this list).
  2. Separate the left and right visual fields (Gazzaniga).
  3. Sodium amobarbital is administered to the left internal carotid artery via the femoral artery and EEG confirms inactivation of the left hemisphere. In the LVF, a consent checkbox for performing the experiment is presented to the right hemisphere, with Y/N checked using the left hand.
  4. Similarly, sodium amobarbital is administered to the right internal carotid artery via the femoral artery and EEG confirms inactivation of the right hemisphere. Consent can be verbally obtained from the left hemisphere.
  5. With both hemispheres activated, qualia calibration on the state space of color is performed (see: A workable solution to the problem of other minds). 
  6. With R inactivated, the phenomenal puzzle is presented to L without enough time for L to solve the puzzle.
  7. Both hemispheres are activated, and L tells the phenomenal puzzle to LR.
  8. L is inactivated and R attempts to solve the puzzle on its own. When R claims to have solved the puzzle (in writing or song most likely), both hemispheres are again reactivated in order to produce LR. R shares its solution with LR.
  9. R is inactivated, and L shares the solution to the phenomenal puzzle. If the solution is correct, then R is conscious! 
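
A minimal sketch of the scoring rule in step 1, assuming CIELAB as the color state space and a just-noticeable difference of roughly 2.3 ΔE units (a common rule of thumb; in the actual protocol both the space and the step size would come from the qualia calibrator, and the function name is illustrative):

```python
import numpy as np

JND_DELTA_E = 2.3  # assumed perceptual step size (Delta-E) in CIELAB

def jnd_count(color_a, color_b):
    """Count just-noticeable steps along the straight line between two CIELAB points."""
    a = np.asarray(color_a, dtype=float)
    b = np.asarray(color_b, dtype=float)
    distance = np.linalg.norm(b - a)             # CIELAB is roughly perceptually uniform,
    return int(np.ceil(distance / JND_DELTA_E))  # so distance / JND approximates step count

# Example: a cyan-ish point to an orange-ish point, as (L*, a*, b*)
print(jnd_count((70.0, -30.0, -20.0), (75.0, 20.0, 60.0)))  # -> 42
```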

Point-of-view characterization of the above procedure (under the assumption that both hemispheres are, in fact, conscious):

  1. From the perspective of the left brain: A researcher asks, "Do you consent to the following procedure?" You answer 'yes', perhaps wondering if you've lost just a part of your computational resources, or created an entirely separate consciousness. A short period of darkness and sedation ensues while consent is obtained from the right brain. Suddenly, the amount of consciousness you're experiencing expands greatly and new memories are available. The computer screen in front of you rapidly cycles through a series of paired color values. The Qualia Calibrator confirms a match by waiting for consensus from the right motor cortex (in lieu of a button press) and from verbal confirmation by the left hemisphere (a code sketch of this consensus loop follows this list). It feels like an eye exam at hyper speed: "Color one or color two? Color two or color three?", but for thousands of colors, many of which you don't have a name for. Then, you sleep, for some indeterminate amount of time. When you awaken, the researcher explains to you the puzzle to solve. Your consciousness is then expanded again, and you repeat the puzzle to yourself, with the strange feeling that "part of you didn't know about it". You go dark again. And when the lights are turned on again, things feel normal, but you have a prominent new memory: the solution to the puzzle. Quickly you check. Take this strange shade of cyan and change it once, twice, three times…yup! That's the mellow orange you were looking for, and in the same number of "just noticeable differences".
  2. From the perspective of the right brain: You awaken to a scrollable consent form with a checkbox, and a left-handed mouse. Despite your state of relative confusion and lack of verbal fluency, you're able to understand the form and check the box. Suddenly, your conscious experience expands and your fluency erupts. The computer screen in front of you rapidly cycles through a series of paired color values. The Qualia Calibrator confirms a match by waiting for consensus from the right motor cortex (in lieu of a button press) and from verbal confirmation by the left hemisphere. It feels like an eye exam at hyper speed: "Color one or color two? Color two or color three?", but for thousands of colors, many of which you don't have a name for. Again you sleep, your consciousness is briefly expanded, and you learn of the puzzle you are to solve. How did you learn about it? It is weird: you started "repeating" the puzzle to yourself, with the strange feeling that "part of you had already heard it before". But either way, now you feel like you have heard it really well. Next, it feels like you took a strong sedative and a memory-loss drug at the same time. Now, in this highly impoverished cognitive state, you have to solve a complicated puzzle. To prove that you exist. Ugh. Fortunately, you have help, in the form of an AI which provides the linear mapping you need to discover, provided you answer how many just noticeable differences occur between each set of two points. Man and machine collaborate to find the solution, and you commit it to memory. Reunited once more, you "share your findings with yourself". It turns out you're conscious. The world now knows: the right hemisphere is conscious on its own when the left one is unconscious. Hooray!
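
The calibration loop that both vignettes describe ("Color one or color two?") can be sketched as a consensus filter over pairwise comparisons. Everything below is a hypothetical stand-in, since the Qualia Calibrator itself is speculative hardware:

```python
import itertools

def calibrate(colors, left_report, right_report):
    """Collect the color pairs on which both hemispheres' judgments agree.

    left_report and right_report are hypothetical callbacks standing in for the
    left hemisphere's verbal answer and the right hemisphere's left-hand response;
    each returns 1 or 2 for "color one" or "color two".
    """
    shared_vocabulary = []
    for c1, c2 in itertools.combinations(colors, 2):
        if left_report(c1, c2) == right_report(c1, c2):  # consensus = calibrated pair
            shared_vocabulary.append((c1, c2))
    return shared_vocabulary
```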

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and me on his AI Alignment podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I'm Lucas Perry, and today we'll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute's mission and core philosophy. We get into the differences between, and arguments for and against, functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr's Levels of Analysis, and we discuss the Symmetry Theory of Valence. We also get into identity and consciousness, the is-ought problem, what this all means for AI alignment, and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say 'full stack' talks about how we do philosophy of mind, we do neuroscience, and we're just getting into neurotechnology, with the thought that yeah, if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there's this conception of qualia formalism; the idea that phenomenology can be precisely represented mathematically. We borrow the goal from Giulio Tononi's IIT. We don't necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system's phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we're realists about consciousness, much as people can be realists about electromagnetism. We're also valence realists. This refers to our belief that emotional valence (pain and pleasure, the goodness or badness of an experience) is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there's two attractors in this discussion. One is the physicalist attractor, and that's QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor, and this is a pretty deep question of, if we want to try to understand what value is, or what's really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What's more real: bits or atoms?

Lucas: That's an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack the skeptic's position on your view, and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is Marr's Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the '80s called Vision, where he basically creates a schema for how to understand, in this particular case, how you actually make sense of the world visually. The framework goes as follows: you have three ways in which you can describe an information processing system. First of all, the computational/behavioral level. What that is about is understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it's able to perform its actions. An analogy here would be an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, and divide, and if you're really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. What the algorithmic level of analysis is about is figuring out what are the internal representations and possible manipulations of those representations such that you get the input-output mapping described by the first layer. Here you have an interesting relationship where understanding the first layer doesn't fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be: whenever you want to add a number, you just push a bead. Whenever you're done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or "what matters". So, for example, behaviorists in the '50s may have associated consciousness, if they gave any credibility to that term, with the behavioral level. They don't really care what's happening inside as long as you have an extended pattern of reinforcement learning over many iterations.

What matters is basically how you're behaving, and that's the crux of who you are. A functionalist will actually care about what algorithms you're running, how it is that you're actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and generally will be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has bearing on what it is like to be that system.
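Marr's point that the computational level underdetermines the algorithmic level lends itself to a toy sketch: the two functions below have identical input-output behavior, so no behavioral test can distinguish them, yet they are plainly different algorithms (the bead-by-bead version echoes the abacus example):

```python
def sum_iterative(n: int) -> int:
    """Bead-by-bead, like the abacus: one algorithmic-level description."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    """Gauss's formula: a different algorithm with the same input-output mapping."""
    return n * (n + 1) // 2

# Identical at the computational/behavioral level...
assert sum_iterative(100) == sum_closed_form(100) == 5050
# ...so behavioral testing alone cannot tell the two algorithms apart.
```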

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition can be, or whether if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense, and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which is trying to explain away consciousness.

Lucas: What is your guys's working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There’s like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power, in the sense that the fact that we are conscious matters for evolution; evolution made us conscious for a reason: it's actually doing some computational legwork that would maybe be possible to do otherwise, but just not as efficiently or as conveniently as is possible with consciousness. Then also you have the property of qualia, the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on, and all of these are in completely different worlds, and in a sense they are, but they have the property that they can be part of a unified experience that can experience color at the same time as experiencing sound. In that sense, those different types of sensations, we describe them all as the category of consciousness, because they can be experienced together.

And finally, you have unity, the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge and take seriously its unity.

Lucas: What are your guys's intuition pumps for thinking why consciousness exists as a thing? Why is there qualia?

Andrés: There's the metaphysical question of why consciousness exists to begin with. That's something I would like to punt on for the time being. There's also the question of why it was recruited for information processing purposes in animals. The intuition here is that there are various contrasts that you can have within experience which can serve a computational role. So, there may be a very deep reason why color qualia or visual qualia is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, people who are synesthetic.

They may open their eyes and they experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia that we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to represent geometrically a projective space. That’s something that naturally comes out of representing the world with the sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans that whenever they opened their eyes, they experience sound and they use that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. The intuition behind why we're conscious is that all of these different contrasts in the structure of the relationships of possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There's this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: that whenever you have a particular type of algorithmic information processing, you get this geometric state-space. In the case of color, that's a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it's in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to how to tackle them is that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is: how do you determine the scope of an algorithm? When you're analyzing a physical system and you're trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, "Whatever is inside here can be part of the same algorithm, but whatever is outside of it can't"? And there really isn't a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism, and one of the examples that I brought up was: if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There's not really an objective way to determine which algorithms a physical system is running. So this is a kind of unanswerable question from the perspective of functionalism, whereas with a physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is: let's say you're at a park enjoying an ice cream. Now imagine a system that has, let's say, isomorphic algorithms to whatever is going on in your brain. Within a functionalist paradigm, the particular algorithms that your brain is running in that precise moment map onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there's actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what is the boundary of the system? Even for some of these particular states that allegedly are very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That casts into question whether there is an objective, non-arbitrary boundary that you can draw around the system and say, "Yeah, this is equivalent to what's going on in your brain right now."

This has a very heavy bearing on the binding problem. The binding problem, for those who haven't heard of it, is basically: how is it possible that 100 billion neurons, just because they're skull-bound, spatially distributed, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like: what is the speed of propagation of information for different states within the brain? I'll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience like experience seems unified, but experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things that we look at a person and we might have an intuition of what type of person that person is, and that if we’re not careful, we can confuse our intuition, we can confuse our feelings with truth as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There’s definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention in two spots of your visual field and make them harmonize. That's phenomenal character, and I would say that there's a strong case to be made not to doubt that property.

Lucas: I'm trying to do my best to channel the functionalist. I think he or she would say, "Okay, so what? That's just more information processing, and I'll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that's different from all the other physical phenomena that science has gone into." It also seems to violate Occam's razor, or a principle of lightness where one's metaphysics or ontology would want to assume the least amount of extra properties or entities in order to explain the world. I'm just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it's an illusion, or be anti-realist about qualia.

Mike: That's a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, "Yeah, I mean there is this thing we call static, and there's this thing we call lightning, and there's this thing we call lodestones or magnets, but all these things are distinct. And to think that there's some unifying frame, some deep structure of the universe that would tie all these things together and highly compress these phenomena, that's crazy talk." And so, it is a viable position today to say that about consciousness, that it's not yet clear whether consciousness has deep structure, but we're assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it the specific theories and assumptions, which you take to be falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is: it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn't be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr's Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, "Hey, if you're identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand the world, how we behave?

Since the implementational level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure." But that's not quite true. In fact, one level of analysis has bearings on another, meaning in some cases the implementation level of analysis doesn't actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let's say with water, you have the option of maybe implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window in terms of helping you understand the algorithm.

But if how you're using water to implement algorithms is by basically creating this system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things? What if the epiphenomenalist is right and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There's physicalism in the sense of a theory of the universe, that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of understanding consciousness, in contrast to functionalism. David Pearce, I think, would describe it as non-materialist physicalist idealism. There's definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, 'non-materialist' is saying that the stuff of the world is not fundamentally unconscious. That's something that materialism claims: that what the world is made of is not conscious, is raw matter so to speak.

'Physicalist', again, in the sense that the laws of physics exhaustively describe behavior, and 'idealist' in the sense that what makes up the world is qualia or consciousness. The big picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it's completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101, and just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalists' take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What's particularly interesting here is that you're making falsifiable claims about phenomenological states. It's good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence here, and how it fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. Usually that gets answered in an agentive way. So basically you think of agents as entities who spin out possibilities for what actions to take, and then they have a way of sorting them by expected utility and then carrying them out. A lot of people may associate what we want or what we like or what we care about with that level, the agent level, whereas we think the true source of value is actually more low-level than that: there's something else that we're actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don't actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we're examining here is: what is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience? That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it's mostly true, and it's definitely mostly true in animals. And then the question becomes: what implements valence gradients? Perhaps the core intuition is this extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It's understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view, but why would the sound of the Bay Area Rapid Transit, BART, which creates these very intense screeching sounds that are not even within the vocal range of humans, that are just really bizarre, never encountered before in our evolutionary past, nonetheless have an extraordinarily negative valence? That's a hint that valence has to do with patterns: it's not just goals and actions and utility functions, but the actual pattern of your experience may determine valence. The same goes for the SUBPAC, a technology that basically renders sounds between 10 and 100 hertz; some of them feel really good, some of them feel pretty unnerving, some of them are anxiety-producing, and it's like, why would that be the case? Especially when you're getting types of input that have nothing to do with our evolutionary past.

It seems that there are ways of triggering high and low valence states just based on the structure of your experience. The last example I'll give is very weird states of consciousness like meditation or psychedelics, which seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss or pain. And again, they don't seem to have much semantic content per se, or rather, the semantic content is not the core reason why they feel the way they feel. It has to do more with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want, but all these have counterexamples. All of these have some points that you can follow the thread back from, which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn't beg the question is to explain it in terms of some patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or will sort of intrinsically encode, your emotional valence, how pleasant or unpleasant this experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce's view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they're just good or bad. So with this valence realism view, this potentiality, this goodness or badness whose nature is self-intimatingly disclosed, has been in the physics and in the world since the beginning, and now it's unfolding and expressing itself more, and the universe is sort of coming to life, and embedded somewhere deep within the universe's structure are these intrinsically good or intrinsically bad valences which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists, would say there's just another reinforcement learning algorithm somewhere that is evaluating these phenomenological states. They're not intrinsically good or bad; that's just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by realizing that liking, wanting, and learning are possible to dissociate, and in particular you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something that matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive; we wouldn't necessarily see it. And also, it would be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample is that somebody may claim that what they truly want is to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this would be: on their graduation day, you give the person an opioid antagonist. The person will subjectively feel that the day is meaningless; you've removed the pleasant gloss of the experience that they were actually looking for, that they thought all along was tied in with the intentional content, with the fact of graduating, when in fact it was the hedonic gloss that they were after. That's one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah. We're trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly, then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems, then in aggregate you get a full solution to the problem of consciousness. So I've split things up into the metaphysics, the math, and the interpretation. The first question is: what metaphysics do you even start with? What ontology do you even use to try to approach the problem? And we've chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there's this question of, okay, so you have your core ontology, in this case physics, and then there's this question of what counts, what actively contributes to consciousness? Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses, but we don't have an answer. Moving into the math: conscious systems seem to have boundaries. If something's happening inside my head, it can directly contribute to my conscious experience, but even if we put our heads together, literally speaking, your consciousness doesn't bleed over into mine; there seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem, and these are just two sides of the same coin. There's this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness in itself through this lens, and it has a certain style of answer, a style of approach. We don't necessarily need to take that approach, but it's an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

If we've figured out our basic ontology of what we think is a good starting point, and of that stuff, what actively contributes to consciousness, then we can figure out some principled way to draw a boundary and say, okay, this is conscious experience A and this is conscious experience B, and they don't overlap. So you have a bunch of information inside the boundary. Then there's this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don't necessarily subscribe to the exact approach, but it's good to be aware of. There's also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the question of: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense, but we're often pretty well calibrated to judge relative differences. Maybe you ask me how I'm doing on a scale of one to ten and I say seven when the reality is a five; maybe that's a problem, but at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There's this researcher William Sethares who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it's not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it's going to be pretty low on average. And if you take the average dissonance of a minor key, it's going to be higher. So in a sense, what distinguishes a minor and a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.

That's a very ground-truth mathematical feature of a musical instrument, and it's going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you're having a very complicated emotion and we ask you to describe it, it's almost like trying to describe a moment in a symphony, this very complicated composition, and how do you even go about it? But deep down, the reason why a particular chord sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very similarly to music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.
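A minimal sketch of the computation being described here, using the constants from Sethares' published dissonance curve; the equal amplitudes and six-harmonic timbre are simplifying assumptions:

```python
import numpy as np

def partial_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares' dissonance between two sine partials (constants from his model)."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)   # scales with the critical bandwidth
    x = s * (fmax - fmin)
    return a1 * a2 * (np.exp(-3.5 * x) - np.exp(-5.75 * x))

def total_dissonance(f_a, f_b, n_harmonics=6):
    """Add up the pairwise dissonance between every harmonic of two notes."""
    partials = [f_a * k for k in range(1, n_harmonics + 1)] + \
               [f_b * k for k in range(1, n_harmonics + 1)]
    return sum(partial_dissonance(p, q)
               for i, p in enumerate(partials)
               for q in partials[i + 1:])

print(total_dissonance(440.0, 550.0))  # major third (4:5): relatively consonant
print(total_dissonance(440.0, 469.3))  # minor second (15:16): markedly more dissonant
```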

These are electromagnetic waves, and it's not exactly static, and it's not exactly a standing wave either, but it gets really close to it. So basically what this is saying is there's this excitation-inhibition wave function, and that happens statistically across macroscopic regions of the brain. There's only a discrete number of ways in which that wave can fit an integer number of times in the brain. We'll give you a link to the actual visualizations for what this looks like. As a concrete example, one of the harmonics with the lowest frequency is a very simple one where the two hemispheres are alternatingly more excited versus inhibited. That will be a low-frequency harmonic because it is a very spatially large wave, an alternating pattern of excitation. Much higher frequency harmonics are much more detailed and obviously hard to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited are these very thin wave fronts.

It's not a mechanical wave as such; it's an electromagnetic wave. So what actually fluctuates is the electric potential in each of these regions of the brain, and within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
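A toy illustration of a state as a weighted sum of harmonics, using graph Laplacian eigenvectors as the harmonics (the same basis Atasoy's connectome-specific framework uses, though here on an eight-node ring rather than a real connectome):

```python
import numpy as np

n = 8                                    # toy "brain regions" arranged in a ring
A = np.zeros((n, n))
for i in range(n):                       # ring adjacency: each region touches two neighbors
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian of the toy connectome

eigvals, harmonics = np.linalg.eigh(L)   # columns = harmonics, ordered low to high frequency

state = np.random.rand(n)                # an arbitrary activity pattern
weights = harmonics.T @ state            # project the state onto the harmonic basis
assert np.allclose(harmonics @ weights, state)  # the state is a weighted sum of harmonics
```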

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is like a fundamental valence thing of the world and all brains who come out of evolution should probably enjoy resonance.

Mike: It's less about the stimulus, less about the exact signal, and more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts, we'd say, or, in precisely technical terms, the consonance that counts, is the stuff that happens inside our brains. Empirically speaking, most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than, for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff going into our ears.

Just to be clear about QRI's move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we've done is combine it with our Symmetry Theory of Valence. It is a way of getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? With the link to the Symmetry Theory of Valence, that should be a very good proxy for how pleasant it is to be that brain.
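A hypothetical version of that "evaluate the data set for harmony" step, reusing total_dissonance from the first sketch and the modes/weights from the second, might look like the following. Assigning each harmonic a temporal frequency and scoring the weighted spectrum for pairwise roughness is my own illustrative gloss, not QRI's actual pipeline:

```python
# Hypothetical harmony score for the decomposed brain state above.
# Each harmonic gets a pretend temporal frequency (in Hz); its weight
# from the decomposition plays the role of an amplitude; Sethares-style
# pairwise roughness of that weighted spectrum then serves as an
# inverse proxy for valence. All numbers are illustrative assumptions.
mode_freqs = [2.0 * (k + 1) for k in range(n_modes)]

roughness = total_dissonance(mode_freqs, list(weights))
harmony_score = -roughness  # higher = more consonant = (on this theory) more pleasant
print(f"roughness: {roughness:.4f}, harmony score: {harmony_score:.4f}")
```

On the Symmetry Theory of Valence, a brain whose active harmonics land in mutually consonant frequency relationships would score high here, and a dissonant configuration would score low.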

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There are probably many ways of generating states of consciousness that are, in a sense, completely unnatural and not based on the harmonics of the brain, but we suspect the bulk of the differences between states of consciousness will cash out as differences in brain harmonics, because that's a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about the connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let's jump into a few problems and framings of consciousness. I'm just curious to see if you guys have any comments on them: the first is what you call the real problem of consciousness, and the second is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained, or to be explained away? This cashes out in terms of whether it is something that can be formalized or whether it is intrinsically fuzzy. I'm calling this the real problem of consciousness, and a lot depends on the answer. There are so many different ways to approach consciousness, hundreds, perhaps thousands, of different carvings of the problem: panpsychism, dualism, non-materialist physicalism, and so on. I think essentially all of these theories sort themselves into two buckets, and the core distinction is whether or not consciousness is real enough to formalize exactly. This frame is perhaps the most useful one for evaluating theories of consciousness.

Lucas: And then there's the meta-problem of consciousness, which is quite funny. It's basically: why have we been talking about consciousness for the past hour, and what's all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn't everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It's a way to try to unify the field and get people to talk to each other, which is not so easy in this field. The meta-problem of consciousness doesn't necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move people make here is to say that all of these crazy things we think and say about consciousness are just an information-processing system modeling its own attentional dynamics. That's one illusionist frame, but even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as computationally relevant, such that you need to have consciousness and so on, while still lacking introspective access. You could have these complicated conscious information-processing systems that don't necessarily self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting when you are at existential risk or when there are reproductive opportunities you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience, added together in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, and they assume everyone else is conscious, and they think that the only place for value to reside is within consciousness, and that a world without consciousness is actually a world without any meaning or value. Even if we think that philosophical zombies, people who are functionally identical to us but with no qualia or phenomenological or experiential states, are conceivable, it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, or valence, have some sort of special ethical or experiential status, unlike qualia such as red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence, that we should be wary of building a Disneyland with no children: some technological wonderland that is filled with marvels of function but doesn't have any subjective experience, doesn't have anyone to enjoy it, basically. I would just say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don't just get turned into something like paperclips. But there's this other problem of making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say that there aren't many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as for your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce's view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you're allowed to probe the quality of your experience. In many states you believe that the reason why you like something is its intentional content: the case of graduating, say, or of getting a promotion, one of those things a lot of people associate with feeling great. But if you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opiate antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, to many different points of view agreeing on which aspect of the experience brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought to be. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and frameworks in ethics make much more, or less, sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I'm trying hard not to privilege any ethical theory. I want to talk about reality, about what exists, what's real, and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think there is an implicit "should" wrapped up in questions about valence, but that's another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame on valence. Whether or not valence also discloses, as David Pearce says, the utility function of the universe is another question, and one that can be decomposed.

Andrés: One framing here, too, is that we do suspect valence is going to be the thing that matters for any mind, if you probe it in the right way in order to achieve reflective equilibrium. The best example is a talk a neuroscientist was giving at some point. There was something off, and everybody seemed to be a little bit anxious or irritated, and nobody knew why. Then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and everything sounded way better and everybody was way happier. There had been a hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn't realize that was the source of the irritation. When it got fixed, everybody was like, "Oh, that's why I was feeling upset."

We will find that to be the case over and over when it comes to improving valence. Somebody in the year 2050 might come to one of the connectome-specific harmonic wave clinics saying, "I don't know what's wrong with me," and if you put them through the scanner you might identify, say, their 17th and 19th harmonics in a state of dissonance. You cancel the 17th to make the state cleaner, and the person will all of a sudden say, "Yeah, my problem is fixed. How did you do that?" So I think it's going to be a lot like that: the things that puzzle us about why we prefer this, or why we think that is worse, will all of a sudden become crystal clear from the point of view of objectively measured valence gradients.

Mike: One of my favorite phrases in this context is "what you can measure you can manage," and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it. This could open the door for, honestly, a lot of amazing things, making the human condition intrinsically better, and also maybe a lot of worrying things: being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We're building AI systems, and they're getting pretty strong and are going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride, for now. So I'd like to discuss a little bit the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There's the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all, especially, to make it safe, especially if it becomes smarter than we are. There's also the political problem: even if you have the best technical solution in the world, a sufficiently good technical solution doesn't mean it will be put into action in a sane way if we're not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that's the philosophical problem: what are we even trying to do here? What is the optimal relationship between AI and humanity? And there are a couple of specific details here. First of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing x-risk.

Lucas: What kind of nihilism are you talking about here? Nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher's question of whether you should just kill yourself? That's the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide. The way I think of it, though, the real philosophical question is how to make love last, how to bring value to existence; and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren't many good Schelling points for global coordination. People talk about how global coordination around building AGI would be a great thing, but we're a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So, moving forward with AI alignment as we're building these more and more complex systems, there's this needed distinction between unconscious and conscious information processing, if we're interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness actually being able to distinguish between unconscious and conscious information-processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what's up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don't do them in a way that would be associated with consciousness. It would be very good to have rules of thumb here for how to do that. One interesting possibility is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but compilers which optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think: in what ways will solving or better understanding consciousness inform AI alignment, from the present day until superintelligence and beyond?

Mike: I think there's a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global, and he had some great things to share about his model fragments paradigm. I think this is the right direction. It's sort of understanding that, yeah, human preferences are insane; they're just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There's this frame in AI safety called the complexity of value thesis; I believe Eliezer came up with it in a post on LessWrong. It's this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn't think of as very valuable; maybe we leave everything the same and just take away freedom, say. This paints a pretty sobering picture of how difficult AI alignment will be.
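The exponential flavor of that fragility claim is easy to see in a toy model. In the sketch below (all numbers are illustrative assumptions, not anything from the conversation), "value" requires each of d independent features to stay within a tolerance band, and a random change perturbs every feature a little; the chance that nothing important breaks collapses as d grows:

```python
import numpy as np

# Toy model of "value as a tiny point in a high-dimensional space":
# value survives only if all d features stay within +/- tolerance,
# and a random perturbation nudges every feature independently.
rng = np.random.default_rng(0)
tolerance, noise, trials = 1.0, 0.5, 2_000

for d in [1, 10, 100, 1000]:
    # Start at the "point" of human values (the origin), then drift randomly.
    perturbed = rng.normal(0.0, noise, size=(trials, d))
    intact = np.all(np.abs(perturbed) <= tolerance, axis=1)
    print(f"d={d:5d}: P(value survives a random nudge) ≈ {intact.mean():.4f}")
```

Each feature individually survives about 95% of the time here, but with a thousand features the joint survival probability is effectively zero, which is the intuition behind the fragility worry. The unity of value thesis Mike describes next is, in effect, the claim that the relevant dimensionality is much lower.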

I think this is perhaps arguably the source of a lot of worry in the community: not only do we need to make machines that won't just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly, if we move at all, we may enter a totally different trajectory, one that we in 2019 wouldn't think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing I'm playing around with is, instead of the complexity of value thesis, the unity of value thesis: it could be that many of the things we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there's some sort of elegant compression that can be made, and maybe things actually aren't so stark. We're not this point in a super-high-dimensional space such that, if we leave the point, everything of value is trashed forever; maybe there's some sort of convergent process that we can follow, something we can essentialize. We can make this list of 100 things that humanity values, and maybe what they all have in common is positive valence, and positive valence can sort of be reverse-engineered. To some people this feels like a very scary, dystopic scenario (don't knock it until you've tried it), but at the same time there's a lot of complexity here.

One core frame that the ideas of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different from what the complexity of value thesis puts forward. Maybe the actual goal is different and, in fact, easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all the preferences and values that human beings have, and the valence realist view, which says that what's ultimately good are certain experiential or hedonic states. I'm interested and curious about whether, if this valence view is true, it's all just going to turn into hedonium in the end.

Mike: I'm personally a fan of continuity. I think that if we do things right we'll have plenty of time to get things right, and also, if we do things wrong, then we'll have plenty of time for things to be wrong. So I'm personally not a fan of big unilateral moves. It just gets back to this question of whether understanding what is can help us, and clearly, yes.

Andrés: Yeah. I guess one view is that we could say: preserve optionality and learn what is, and then from there hopefully we'll be able to better inform our oughts, and with maintained optionality we'll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it's a seductive frame, I would say. If whole brain emulations wouldn't necessarily have the same qualia as the original humans, based on hardware considerations, there could be some weird lock-in effects: if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It's a pretty good taxonomy. Basically there's open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we're all one consciousness. Another frame is that our true identity is the light of consciousness, so to speak, so it doesn't matter in what form it manifests; it's always the same fundamental ground of being. Then you have the common-sense view, called closed individualism: you start existing when you're born, you stop existing when you die. You're just this segment. Some religions actually extend that into the future or the past, with reincarnation or maybe with heaven.

It's the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have the view called empty individualism, which is that you're just a moment of experience. That's fairly common among physicists, and a lot of people who've tried to formalize consciousness often converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, all seem to be implicitly based on closed individualism, and they're not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn't actually carve nature at its joints, as a Buddhist might say, and the feeling of continuity of being a separate, unique entity is an illusory construction of your phenomenology, that casts how to approach rationality itself, and even self-interest, in a completely different light. If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into very tricky situations like: what if there is mind-melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common-sense view of identity, but they're not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there's probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we're actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it's evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to believe yourself to be that which you can affect most directly in a causal way, if you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have. And if you think of a society of open individualists, everybody altruistically maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that the latter view would have a tremendous evolutionary advantage in that context. So I'm not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it: how to make it evolutionarily stable and also how to make it ethical. It's an open question. I do think it's important to think about, and if you take consciousness very seriously, especially within physicalism, that usually casts huge doubts on the common-sense view of identity.

It doesn’t seem like a very plausible view if you actually tried to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we're not all, by default, empty individualists or open individualists. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self, since they're not the same as you? That leads to a problem of defection. And open individualism, where everything is the same being, so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn't allow altruistic punishment or any way to stop the free riding. There's interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one's theory of identity. People open themselves up to getting hacked in different ways, and different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that's really good and sometimes really bad. I would make the prediction that, if not open individualism in its full-fledged form, then a weaker sense of identity than closed individualism is likely going to be highly adaptive in the future, as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells and not try to disturb the local attractor too much. That itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the ones making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity, I think, is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, with identity, I'd like to move beyond all distinctions of sameness or difference. To say "oh, we're all one consciousness" seems to me like saying we're all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world, a ground-level part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

In the same way that it would be nonsense to say, "Oh, I am these specific atoms; I am just the forces of nature that are bounded within my skin and body," there is, in what we were discussing about consciousness, the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional uses, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of the valence theory, if valence is actually something good or bad, then, as David Pearce says, it's really just an epistemological problem that you don't have access to other brain states in order to see the self-intimating nature of what it's like to be that thing in that moment.

There's a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way; but in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is, I guess, why evolution selected for it: it's good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn't matter where the bliss is. Bliss is bliss, and there's no such thing as your bliss or anyone else's bliss. Bliss is like its own independent feature or property, and you don't really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that's playing out.

The universe is just peopling all of us at the same time, and when you get this view and see yourself as just a super-thin slice of the evolution of consciousness and life, for me it's like: why do I really need to propagate my information into the future? I really don't think there's anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but people who seek immortality through AI, or who seek any kind of continuation of what they believe to be their self, I just see all of that as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don't have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that's not necessarily super easy if you're suffering from depression or anxiety. So I just think this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There's an article I wrote, I just called it Consciousness vs. Replicators. That kind of gets to the heart of this issue. It sounds a little bit like good and evil, but it really isn't. The true enemy here is replication for replication's sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and the benefit of consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark's general frame of us living in this mathematical universe. One reframe of what we were just talking about, in those terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad, and to optimize our light cone for those sorts of patterns. This may have some counterintuitive implications; maybe closed individualism is actually a very adaptive thing that, in the long term, builds robust societies. It could be that that's not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people, or, nothing is either good or bad but thinking makes it so. But my pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible: the best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. We tend to think of consciousness on the human scale: is the human condition good or bad, is the balance of human experience on the good end, the heavenly end, or the hellish end? But if we do have an objective theory of consciousness, we should be able to point it at things that are not human and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated but is falsifiable, and this gets into Bostrom's simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, first of all, the human stuff might just be a rounding error.

Most of the value, in the sense of positive and negative valence, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding in the cosmological sense. I'm very suspicious that the Big Bang starts with a very symmetrical state; I'll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not human; cosmological-scale events or objects would be very interesting to point it at. This would give a much clearer answer as to whether we live somewhere closer to heaven or hell than human intuition can.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, wonderful questions, I think. It's very rare for an interviewer to have non-conventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org, and we're working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We're building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my writing at my blog, opentheory.net, and Andrés' at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey

Thoughts on the ‘Is-Ought Problem’ from a Qualia Realist Point of View

tl;dr If we construct a theory of meaning grounded in qualia and felt-sense, it is possible to congruently arrive at “should” statements on the basis of reason and “is” claims. Meaning grounded in qualia allows us to import the pleasure-pain axis and its phenomenal character to the same plane of discussion as factual and structural observations.

Introduction

The Is-Ought problem (also called “Hume’s guillotine”) is a classical philosophical conundrum. On the one hand people feel that our ethical obligations (at least the uncontroversial ones like “do not torture anyone for no reason”) are facts about reality in some important sense, but on the other hand, rigorously deriving such “moral facts” from facts about the universe appears to be a category error. Is there any physical fact that truly compels us to act in one way or another?

A friend recently asked about my thoughts on this question and I took the time to express them to the best of my knowledge.

Takeaways

I provide seven points of discussion that together can be used to make the case that “ought” judgements often, though not always, are on the same ontological footing as “is” claims. Namely, that they are references to the structure and quality of experience, whose ultimate nature is self-intimating (i.e. it reveals itself) and hence inaccessible to those who lack the physiological apparatus to instantiate it. In turn, we could say that within communities of beings who share the same self-intimating qualities of experience, the is/ought divide may not be completely unbridgeable.


Summaries of Question and Response

Summary of the question:

How does a “should” emerge at all? How can reason and/or principles and/or logic compel us to follow some moral code?

Summary of the response:

  1. If “ought” statements are to be part of our worldview, then they must refer to decisions about experiences: what kinds of experiences are better/worse, what experiences should or should not exist, etc.
  2. A shared sense of personal identity (e.g. Open Individualism – which posits that “we are all one consciousness”) allows us to make parallels between the quality of our experience and the experience of others. Hence if one grounds “oughts” on the self-intimating quality of one’s suffering, then we can also extrapolate that such “oughts” must exist in the experience of other sentient beings and that they are no less real “over there” simply because a different brain is generating them (general relativity shows that every “here and now” is equally real).
  3. Reduction cuts both ways: if the “fire in the equations of physics” can feel a certain way (e.g. bliss/pain) then objective causal descriptions of reality (about e.g. brain states) are implicitly referring to precisely that which has an “ought” quality. Thus physics may be inextricably connected with moral “oughts”.
  4. If one loses sight of the fact that one’s experience is the ultimate referent for meaning, it is possible to end up in nihilistic accounts of meaning (e.g. such as Quine’s Indeterminacy of translation and Dennett’s inclusion of qualia within that framework). But if one grounds meaning in qualia, then suddenly both causality and value are on the same ontological footing (cf. Valence Realism).
  5. To see clearly the nature of value it is best to examine it at its extremes (such as MDMA bliss vs. the pain of kidney stones). Having such experiences illuminates the “ought” aspect of consciousness, in contrast to the typical quasi-anhedonic “normal everyday states of consciousness” that most people (and philosophers!) tend to reason from. It would be interesting to see philosophers discuss e.g. the Is-Ought problem while on MDMA.
  6. Claims that “pleasure and pain, value and disvalue, good and bad, etc.” are an illusion by long-term meditators based on the experience of “dissolving value” in meditative states are no more valid than claims that pain is an illusion by someone doped on morphine. In brief: such claims are made in a state of consciousness that has lost touch with the actual quality of experience that gives (dis)value to consciousness.
  7. Admittedly the idea that one state of consciousness can even refer to (let alone make value judgements about) other states of consciousness is very problematic. In what sense does “reference” even make sense? Every moment of experience only has access to its own content. We posit that this problem is not ultimately unsolvable, and that human concepts are currently mere prototypes of a much better future set of varieties of consciousness optimized for truth-finding. As a thought experiment to illustrate this possible future, consider a full-spectrum superintelligence capable of instantiating arbitrary modes of experience and impartially comparing them side by side in order to build a total order of consciousness.

Full Question and Response

Question:

I realized I don’t share some fundamental assumptions that seemed common amongst the people here [referring to the Qualia Research Institute and friends].

The most basic way I know how to phrase it, is the notion that there’s some appeal to reason and/or principles and/or logic that compels us to follow some type of moral code.

A (possibly straw-man) instance is the notion I associate with effective altruism, namely, that one should choose a career based on its calculable contribution to human welfare. The assumption is that human welfare is what we “should” care about. Why should we? What’s compelling about trying to reconfigure ourselves from whatever we value at the moment to replacing that thing with human welfare (or anything else)? What makes us think we can even truly succeed in reconfiguring ourselves like this? The obvious pitfall seems to be we create some image of “goodness” that we try to live up to without ever being honest with ourselves and owning our authentic desires. IMO this issue is rampant in mainstream Christianity.

More generally, I don’t understand how a “should” emerges within moral philosophy at all. I understand how starting with a want, say happiness, and noting a general tendency, such as I become happy when I help others, that one could deduce that helping others often is likely to result in a happy life. I might even say “I should help others” to myself, knowing it’s a strategy to get what I want. That’s not the type of “should” I’m talking about. What I’m talking about is “should” at the most basic level of one’s value structure. I don’t understand how any amount of reasoning could tell us what our most basic values and desires “should” be.

I would like to read something rigorous on this issue. I appreciate any references, as well as any elucidating replies.

Response:

This is a very important topic. I think it is great that you raise this question, as it stands at the core of many debates and arguments about ethics and morality. I think that one can indeed make a really strong case for the view that “ought” is simply never logically implied by any accurate and objective description of the world (the famous is/ought Humean guillotine). I understand that an objective assessment of all that is will usually be cast as a network of causal and structural relationships. By starting out with a network of causal and structural relationships and using logical inferences to arrive at further high-level facts, one is ultimately bound to arrive at conclusions that themselves are just structural and causal relationships. So where does the “ought” fit in here? Is it really just a manner of speaking? A linguistic spandrel that emerges from evolutionary history? It could really seem like it, and I admit that I do not have a silver bullet argument against this view.

However, I do think that eventually we will arrive at a post-Galilean understanding of consciousness, and that this understanding will itself allow us to point out exactly where- if at all- ethical imperatives are located and how they emerge. For now all I have is a series of observations that I hope can help you develop an intuition for how we are thinking about it, and why our take is original and novel (and not simply a rehashing of previous arguments or appeals to nature/intuition/guilt).

So without further ado I would like to lay out the following points on the table:

  1. I am of the mind that if any kind of “ought” is present in reality it will involve decision-making about the quality of consciousness of subjects of experience. I do not think that it makes sense to talk about an ethical imperative that has anything to do with non-experiential properties of the universe precisely because there would be no one affected by it. If there is an argument for caring about things that have no impact on any state of consciousness, I have yet to encounter it. So I will assume that the question refers to whether certain states of consciousness ought to or ought not to exist (and how to make trade offs between them).
  2. I also think that personal identity is key for this discussion, but why this is the case will make sense in a moment. The short answer is that conscious value is self-intimating/self-revealing, and in order to pass judgement on something that you yourself (as a narrative being) will not get to experience you need some confidence (or reasonable cause) to believe that the same self-intimating quality of experience is present in other narrative orbits that will not interact with you. For the same reasons as (1) above, it makes no sense to care about philosophical zombies (no matter how much they scream at you), but the same is the case for “conscious value p. zombies” (where maybe they experience color qualia but do not experience hedonic tone i.e. they can’t suffer).
  3. A very important concept that comes up again and again in our research is the notion that “reduction cuts both ways”. We take dual aspect monism seriously, and in this view we would consider the mathematical description of an experience and its qualia two sides of the same coin. Now, many people come here and say “the moment you reduce an experience of bliss to a mathematical equation you have removed any fuzzy morality from it and arrived at a purely objective and factual account which does not support an ‘ought ontology'”. But doing this mental move requires you to take the mathematical account as a superior ontology to that of the self-intimating quality of experience. In our view, these are two sides of the same coin. If mystical experiences are just a bunch of chemicals, then a bunch of chemicals can also be a mystical experience. To reiterate: reduction cuts both ways, and this happens with the value of experience to the same extent as it happens with the qualia of e.g. red or cinnamon.
  4. Mike Johnson tends to bring up Wittgenstein and Quine to the “Is-Ought” problem because they are famous for ‘reducing language and meaning’ to games and networks of relationships. But here you should realize that you can apply the concept developed in (3) above just as well to this matter. In our view, a view of language that has “words and objects” at its foundation is not a complete ontology, and nor is one that merely introduces language games to dissolve the mystery of meaning. What’s missing here is “felt sense” – the raw way in which concepts feel and operate on each other whether or not they are verbalized. It is my view that here phenomenal binding becomes critical because a felt sense that corresponds to a word, concept, referent, etc. in itself encapsulates a large amount of information simultaneously, and contains many invariants across a set of possible mental transformations that define what it is and what it is not. More so, felt senses are computationally powerful (rather than merely epiphenomenal). Consider Daniel Tammet‘s mathematical feats achieved by experiencing numbers in complex synesthetic ways that interact with each other in ways that are isomorphic to multiplication, factorization, etc. More so, he does this at competitive speeds. Language, in a sense, could be thought of as the surface of felt sense. Daniel Dennett famously argued that you can “Quine Qualia” (meaning that you can explain it away with a groundless network of relationships and referents). We, on the opposite extreme, would bite the bullet of meaning and say that meaning itself is grounded in felt-sense and qualia. Thus, colors, aromas, emotions, and thoughts, rather than being ultimately semantically groundless as Dennett would have it, turn out to be the very foundation of meaning.
  5. In light of the above, let’s consider some experiences that embody the strongest degree of the felt sense of “ought to be” and “ought not to be” that we know of. On the negative side, we have things like cluster headaches and kidney stones. On the positive side we have things like Samadhi, MDMA, and 5-MEO-DMT states of consciousness. I am personally more certain that the “ought not to be” aspect of experience is more real than the “ought to be” aspect of it, which is why I have a tendency (though no strong commitment) towards negative utilitarianism. When you touch a hot stove you get this involuntary reaction and associated valence qualia of “reality needs you to recoil from this”, and in such cases one has degrees of freedom into which to back off. But when experiencing cluster headaches and kidney stones, this sensation- that self-intimating felt-sense of ‘this ought not to be’- is omnidirectional. The experience is one in which one feels like every direction is negative, and in turn, at its extremes, one feels spiritually violated (“a major ethical emergency” is how a sufferer of cluster headaches recently described it to me). This brings me to…
  6. The apparent illusory nature of value in light of meditative deconstruction of felt-senses. As you put it elsewhere: “Introspectively – Meditators with deep experience typically report all concepts are delusion. This is realized in a very direct experiential way.” Here I am ambivalent, though my default response is to make sense of the meditation-induced feeling that “value is illusory” as itself an operation on one’s conscious topology that makes the value quality of experience get diminished or plugged out. Meditation masters will say things like “if you observe the pain very carefully, if you slice it into 30 tiny fragments per second, you will realize that the suffering you experience from it is an illusory construction”. And this kind of language itself is, IMO, liable to give off the illusion that the pain was illusory to begin with. But here I disagree. We don’t say that people who take a strong opioid to reduce acute pain are “gaining insight into the fundamental nature of pain” and that’s “why they stop experiencing it”. Rather, we understand that the strong opioid changes the neurological conditions in such a way that the quality of the pain itself is modified, which results in a duller, “asymbolic“, non-propagating, well-confined discomfort. In other words, strong opioids reduce the value-quality of pain by locally changing the nature of pain rather than by bringing about a realization of its ultimate nature. The same with meditation. The strongest difference here, I think, would be that opioids are preventing the spatial propagation of pain “symmetry breaking structures” across one’s experience and thus “confine pain to a small spatial location”, whereas meditation does something different that is better described as confining the pain to a small temporal region. This is hard to explain in full, and it will require us to fully formalize how the subjective arrow of time is constructed and how pain qualia can make copies across it. [By noting the pain very quickly one is, I believe, preventing it from building up and then having “secondary pain” which emerges from the cymatic resonance of the various lingering echoes of pain across one’s entire “pseudo-time arrow of experience”.] Sorry if this sounds like word salad, I am happy to unpack these concepts if needed, while also admitting that we are in early stages of the theoretical and empirical development.
  7. Finally, I will concede that the common sense view of “reference” is very deluded on many levels. The very notion that we can refer to an experience with another experience, that we can encode the properties of a different moment of experience in one’s current moment of experience, that we can talk about the “real world” or its “objective ethical values” or “moral duty” is very far from sensical in the final analysis. Reference is very tricky, and I think that a full understanding of consciousness will do some severe violence to our common sense in this area. That, however, is different from the self-disclosing properties of experience such as red qualia and pain qualia. You can do away with all of common sense reference while retaining a grounded understanding that “the constituents of the world are qualia values and their local binding relationships”. In turn, I do think that we can aim to do a decently good job at re-building from the ground up a good approximation of our common sense understanding of the world using “meaning grounded in qualia”, and once we do that we will be in a solid foundation (as opposed to the, admittedly very messy, quasi-delusional character of thoughts as they exist today). Needless to say, this may also need us to change our state of consciousness. “Someday we will have thoughts like sunsets” – David Pearce.

 

Burning Man 2.0: The Eigen-Schelling Religion, Entrainment & Metronomes, and the Eternal Battle Between Consciousness and Replicators

Because our consensus reality programs us in certain destructive directions, we must experience other realities in order to know we have choices.

Anyone who limits her vision to memories of yesterday is already dead.

Lillie Langtry

Last year I wrote a 13,000 word essay about my experience at Burning Man. This year I will also share some thoughts and insights concerning my experience while being brief and limiting myself to seven thousand words. I decided to write this piece stand-alone in such a way that you do not need to have read the previous essay in order to make sense of the present text.


Camp Soft Landing

I have been wanting to attend Burning Man for several years, but last year was the first time I had both the time and resources to do so. Unfortunately I was not able to get a ticket in the main sale, so I thought I would have to wait another year to have the experience. Out of the blue, however, I received an email from someone from Camp Soft Landing asking me if I would be interested in giving a talk at Burning Man in their Palenque Norte speaker series. My immediate response was “I would love to! But I don’t have a ticket and I don’t have a camp.” The message I received in return was “Great! Well, we have extra tickets, and you can stay at our camp.” So just like that I suddenly had the opportunity to not only attend, but also be at a wonderful camp and give a talk about consciousness research.

Full Circle Teahouse

The camp I’ve been a part of turned out to be an extremely good fit for me both as a researcher and as a person. Camp Soft Landing is one of the largest camps at Burning Man, featuring a total of 150 participants every year. Its two main contributions to the playa are the Full Circle Teahouse and Palenque Norte. The Full Circle Teahouse is a place in which we serve adaptogen herbal tea blends and Pu’er tea in a peaceful setting that emphasizes presence, empathy, and listening. It’s also full of pillows and cozy blankets and serves as a place for people who are overwhelmed to calm down or crash after a hectic night. (During training we were advised to expect that some people “may not know where they are or how they got here when they wake up in the early morning” and to “help them get oriented and offer them tea”). Here are a few telling words by the Teahouse founder Annie Oak:

The real secret sauce to our camp’s collective survival has been our focus on the well being of everyone who steps inside Soft Landing. While the ancestral progenitor who occupied our location before us, Camp Above the Limit, ran a lively bar, we made a decision not to serve alcohol in our camp. I enjoy an occasional cocktail, but I believe that the conflating of the gift economy with free alcohol has compromised the public health and social cohesion of Black Rock City. We do not prohibit alcohol at Soft Landing, but we do not permit bars inside our camp. Instead, we run a tea bar at our Tea House for those seeking a place to rest, hydrate and receive compassionate care. We also give away hundreds of gallons of water to Tea House visitors. We don’t want to undermine their self-sufficiency, but we can proactively reduce the number of guests who become ill from dehydration. We keep our Tea House open until Monday after the Burn to help weary people stay alert on the perilous drive back home.

– Doing It Right: Theme Camp Management Insights from Camp Soft Landing

Palenque Norte

Palenque Norte is a speaker series founded by podcaster Lorenzo Hagerty in 2003 (cf. A Brief History of Palenque Norte). A friend described it as “TED for Psychedelic Research at Burning Man”, which is pretty accurate. Indeed, looking at a list of Palenque Norte speakers is like browsing a who’s who of the scientific and artistic psychedelic community: Johns Hopkins’ Roland Griffiths, MAPS’ Rick Doblin, Heffter’s George Greer, EFF’s John Gilmore, Ann & Sasha Shulgin (Q&A), DanceSafe’s Mitchell Gomez, Consciousness Hacking’s Mikey Siegel, Paul Daley, Bruce Damer, Will Siu, Emily Williams, Sebastian Job, Alex Grey, Android Jones, and many others. For reference, here was this year’s Palenque Norte schedule:

Thanks to the Full Circle Teahouse and Palenque Norte, the social and memetic composition of Camp Soft Landing is one that is characterized by a mixture of veteran scientists and community builders in their 50s and 60s, science and engineering nerds with advanced degrees in their late 20s and early 30s, and a dash of millennials and Gen-Z-ers in the rationalist/Effective Altruist communities.


Lorenzo Hagerty, Sasha Shulgin, and Bruce Damer (Burning Man, Palenque Norte c. 2007)

The people of Camp Soft Landing are near and dear to my heart given that they take consciousness seriously, they have a scientific focus, and they emit a strong intellectual vibe. As a budding qualia researcher myself, I feel completely at home there. As it turns out, this type of vibe is not at all out of place at Burning Man…

Burning Man Attendees

I would hazard a guess that Burning Man attendees are on average much more open to experience, conscientious, cognitively oriented, and psychologically robust than people in the general population. In particular, the combination of conscientiousness and openness to experience is golden. These are people who are not only able to think of crazy ideas, but who are also diligent enough to manifest them in the real world in concrete forms. This may account for the high production value and elaborate nature of the art, music, workshops, and collective activities. While the openness-to-experience aspect of Burning Man is fairly self-evident (it jumps out at you if you do a quick Google Images search), the conscientiousness aspect may be a little harder to believe. Here I will quote a friend to illustrate this component:

Burning Man is the annual meeting of the recreational logistics community. Or maybe it’s a job interview for CEO: how to deal with broken situations and unexpected constraints in a multi-agent setting, just to survive.

[…]

Things I learned / practiced in the last couple of weeks: truck driving, clever packing, impact driver, attaching bike trailer, pumping gas and filling generators, knots, adding hanging knobs to a whiteboard, tying things with wire, quickly moving tents on the last night, finding rides, using ratchet straps, opening & closing storage container, driving to Treasure Island.

GL

Indeed, this may be one of the key barriers to entry that defines the culture of Burning Man, and it explains why the crazy ideas people have in a given year tend to come back in the form of art the next year… rather than vanishing into thin air.

There are other key features of the people who attend which can be seen by inspecting the Burning Man Census report. Here is a list of attributes, their baserate for Burners, and the baserate in the general population (for comparison): Having an undergraduate degree (73.6% vs. 32%), holding a graduate degree (31% vs. 10%), being gay/lesbian (8.5% vs. 1.3%), bisexual (10% vs. 1.8%), bicurious (11% vs. ??), polyamorous (20% vs. 5%), mixed race (9% vs. 3%), female (40% vs. 50%), median income (62K vs. 30K), etc.

From a bird’s eye view one can describe Burners as much more: educated, LGBT, liberal or libertarian, “spiritual but not religious”, and more mixed race than the average person. There are many more interesting cultural and demographic attributes that define the population of Black Rock City, but I will leave it at that for now for the sake of brevity. That said, feel free to inspect the following Census graphs for further details:



Last year at Burning Man I developed a cluster of new concepts, including “The Goldilocks Zone of Oneness” and “Hybrid Vigor in the context of post-Darwinian ethics.” In that essay I also included my conversation with God and instructions for a guided oneness meditation. This year I continued to use the expanded awareness field of the Playa to further develop these and other concepts. In what follows I will describe some of the main ideas I explored, and then conclude with a summary of the talk I gave at Palenque Norte. If any of the following sections are too dense or uninteresting, please feel free to skip them.

The Universal Eigen-Schelling Religion

On one of the nights, a group of friends and I went on a journey following an art car, stopping every now and then to dance and check out some art. At one point we drove through a large crowd of people, and by the time the art car was on the other side, a few people from the group were missing. The question then became “what do we do?” We hadn’t agreed on a strategy for dealing with this situation before we embarked on the trip. After a couple of minutes we all converged on a strategy: stay near the art car and drive around until we find the missing people. The whole situation had a “lost in space” quality. Finding individual people is very hard, since from a distance everyone is wearing roughly-indistinguishable multi-colored blinking LEDs all over their body. But since art cars are large and more distinguishable at a distance, they become natural Schelling points for people to converge on. Schelling points are a natural coordination mechanism in the absence of direct communication channels.

We were thus able to re-group almost entirely (with only one person missing, whom we eventually had to give up on) by independently converging on the meta-heuristic of looking for the most natural Schelling point and finding the rest of the group there. For the rest of the night I kept thinking about how this meta-strategy might play out in the grand scheme of things.
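
The logic of this convergence is easy to simulate. Below is a toy sketch in which agents, without communicating, each head to the landmark they perceive as most salient; everything in it, from the landmark names to the salience numbers, is made up for illustration:

```python
import random

# Hypothetical landmarks with rough "salience" scores (size, brightness, distinctiveness).
LANDMARKS = {"art_car": 9.0, "the_man": 8.5, "small_sculpture": 2.0, "random_tent": 0.5}

def pick_meeting_point(noise=0.3):
    """Each agent independently ranks landmarks by perceived salience
    (true salience plus personal noise) and heads to the top one."""
    return max(LANDMARKS, key=lambda name: LANDMARKS[name] + random.gauss(0, noise))

agents = [pick_meeting_point() for _ in range(6)]
print(agents)  # with modest noise, most agents converge on "art_car"
```

Because salience is roughly common knowledge, independent noisy choices mostly land in the same place, and that is all a Schelling point is.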

If you follow Qualia Computing you may know that our default view on the nature of ethics is valence utilitarianism. People think they want specific things (e.g. ice-cream, a house, to be rich and famous, etc.) but in reality what they want is the high-valence response (i.e. happiness, bliss, and pleasure) that is triggered by such stimuli. When two people disagree on e.g. whether a certain food is tasty, they are not usually talking about the same experience. For one person, such food could induce high degrees of sensory euphoria, while for the other person, the food may leave them cold. But if they had introspective access to each other’s valence response, the disagreement would vanish (“Ah, I didn’t realize mayo produced such a good feeling for you. I was fixated on the aversive reaction I had to it.”). In other words, disagreements about the value of specific stimuli come down to lack of empathetic fidelity between people rather than a fundamental value mismatch. Deep down, we claim, we all like the same states of consciousness, and our disagreements come from the fact that their triggers vary between people. We call the fixation on the stimuli rather than the valence response the Tyranny of the Intentional Object.

In the grand scheme of things, we posit that advanced intelligences across the multiverse will generally converge on valence realism and valence utilitarianism. This is not an arbitrary value choice; it’s the natural outcome of looking for consistency among one’s disparate preferences and trying to investigate the true nature of conscious value. Insofar as curiosity is evolutionarily adaptive, any sufficiently general and sufficiently curious conscious mind eventually reaches the conclusion that value is a structural feature of conscious states and sheds the illusion of intentionality and closed identity. And while in the context of human history one could point at specific philosophers and scientists who have advanced our understanding of ethics (e.g. Plato, Bentham, Singer, Pearce), there may be a very abstract but universal way of describing the general tendency of curious conscious intelligences towards valence utilitarianism. It would go like this:

In a physicalist panpsychist paradigm, the vast majority of moments of experience do not occur within intelligent minds and leave no records of their phenomenal character for future minds to examine and inspect. A subset of moments of experience, though, do happen to take place within intelligent minds. We can call these conscious eigen-states because their introspective value can be retroactively investigated and compared against the present moment of experience, which has access to records of past experiences. Humans, insofar as they do not experience large amounts of amnesia, are able to experience a wide range of eigen-states throughout their lives. Thus, within a single human mind, many comparisons between the valence of various states of consciousness can be carried out (this is complicated and not always feasible given the state-dependence of memory). Either way, one could visualize how the information about the relative ranking of experiences is gathered across a Directed Acyclic Graph (DAG) of moments of experience that have partial introspective access to previous moments of experience. Furthermore, if the assumption of continuity of identity is made (i.e. that each moment of experience is witnessed by the same transcendental subject) then each evaluation between pairs of states of consciousness contributes a noisy datapoint to a universal ranking of all experiences and values.
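
To make the aggregation step concrete, here is a minimal sketch of how noisy pairwise comparisons, each made by a single moment of experience with access to records of others, could accumulate into a global ranking. The Elo-style update rule and all the example states and numbers are my own illustrative choices:

```python
import random

# Hypothetical experiences with a hidden "true" valence; moments of experience
# only ever produce noisy pairwise comparisons between remembered states.
true_valence = {"satori": 9, "good_meal": 5, "white_noise": 0, "migraine": -7}
rating = {state: 0.0 for state in true_valence}  # estimated ranking, starts flat

def compare(a, b, noise=2.0):
    """One moment of experience introspectively compares two remembered states."""
    return a if true_valence[a] + random.gauss(0, noise) > true_valence[b] + random.gauss(0, noise) else b

for _ in range(2000):  # each iteration = one noisy datapoint contributed by some moment
    a, b = random.sample(list(true_valence), 2)
    winner = compare(a, b)
    loser = b if winner == a else a
    expected = 1 / (1 + 10 ** ((rating[loser] - rating[winner]) / 4))
    rating[winner] += 0.1 * (1 - expected)
    rating[loser] -= 0.1 * (1 - expected)

print(sorted(rating, key=rating.get, reverse=True))  # recovers the true order given enough comparisons
```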

After enough comparisons, a threshold number of evaluated experiences may be crossed, at which point a general theory of value can begin to be constructed. Thus a series of natural Schelling points for “what is universally valuable” become accessible to subsequent moments of experience. One of these focal points is the prevention of suffering throughout the entire multiverse: that is, the avoidance of experiences that do not want to exist, independently of their location in space-time. Likewise, we would see another focal point that adds an imperative to realize experiences that value their own existence (“let the thought forms who love themselves reproduce and populate the multiverse”).

I call this approach to ethics the Eigen-Schelling Religion. Any sapient mind in the multiverse with a general enough ability to reason about qualia and reflect about causality is capable of converging to it. In turn, we can see that many concepts at the core of world religions are built around universal Eigen-Schelling points. Thus, we can rest assured that both the Bodhisattva imperative to eliminate suffering and the Christ “world redeeming” sentiment are reflections of a fundamental converging process to which many other intelligent life-forms have access across the entire multiverse. What I like about this framework is that you don’t need to take anyone’s word for what constitutes wisdom in consciousness. It naturally exists as reflective focal points within the state-space of consciousness itself in a way that transcends time and space.

Entrainment and Metronomes

In A Future for Neuroscience my friend and colleague Mike E. Johnson from the Qualia Research Institute explored how taking seriously the paradigm of Connectome-Specific Harmonic Waves (CSHW) leads us to reinterpret cognitive and personality traits in an entirely new light. In particular, here is what he has to say about emotional intelligence:

EQ (emotional intelligence quotient) isn’t very good as a formal psychological construct- it’s not particularly predictive, nor very robust when viewed from different perspectives. But there’s clearly something there– empirically, we see that some people are more ‘tuned in’ to the emotional & interpersonal realm, more skilled at feeling the energy of the room, more adept at making others feel comfortable, better at inspiring people to belief and action. It would be nice to have some sort of metric here.

I suggest breaking EQ into entrainment quotient (EnQ) and metronome quotient (MQ). In short, entrainment quotient indicates how easily you can reach entrainment with another person. And by “reach entrainment”, I mean how rapidly and deeply your connectome harmonic dynamics can fall into alignment with another’s. Metronome quotient, on the other hand, indicates how strongly you can create, maintain, and project an emotional frame. In other words, how robustly can you signal your internal connectome harmonic state, and how effectively can you cause others to be entrained to it. […] Most likely, these are reasonably positively correlated; in particular, I suspect having a high MQ requires a reasonably decent EnQ. And importantly, we can likely find good ways to evaluate these with CSHW.

This conceptual framework can be useful for making sense of the novel social dynamics that take place in Black Rock City. In particular, as illustrated by the Census responses, most participants are in a very open and emotionally receptive state at Burning Man:

One could say that by feeling safe, welcomed, and accepted at Burning Man, attendees adopt a very high Entrainment Quotient modus operandi. In tandem, we then see large art pieces, art cars, theme camps, and powerful sound systems blasting their unique distinctive emotional signals throughout the Playa. In a sense, the entire place looks like an ecosystem of brightly-lit, high-energy metronomes trying to attract the attention of a swarm of people in highly open and sensitive states, with the potential to be entrained by these metronomes. Since the competition for attention is ferocious, no single metronome can dominate or totally brainwash you. All it takes to get a bad signal out of your head is to walk 50 meters to another place, where the vibe will in all likelihood be completely different and overwrite the previous state.
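
The entrainment metaphor can be made concrete with the textbook Kuramoto model of coupled oscillators. The sketch below is my own illustration rather than anything from Johnson’s CSHW work: a crowd of oscillators with scattered natural frequencies phase-locks to a single strong “metronome”:

```python
import math, random

# Kuramoto-style sketch: many "attendee" oscillators plus one strong "art car" metronome.
N, DT, STEPS = 50, 0.01, 5000
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]    # natural frequencies (rad/s), hypothetical
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
metronome_phase, metronome_freq, K = 0.0, 1.2, 2.0    # K = coupling strength to the metronome

for _ in range(STEPS):
    metronome_phase += metronome_freq * DT            # the metronome ignores the crowd
    phases = [p + (f + K * math.sin(metronome_phase - p)) * DT
              for p, f in zip(phases, freqs)]

# Order parameter r in [0, 1]: r = 1 means the whole crowd is phase-locked to one rhythm.
r = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / N
print(round(r, 2))  # with K large relative to the frequency spread, r approaches 1
```

When the coupling dominates the spread of natural frequencies, the order parameter approaches 1: the crowd locks onto the metronome’s rhythm regardless of each member’s starting phase.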

This dynamic reaches its ultimate climax on the very night of the Burn, as (almost) everyone gathers around the Man in a maximally receptive state, while every art car and group vibe surrounds the crowd and blasts its unique signal as loudly and as intensely as possible, all at the same time. This leads to the reification of the collective Burning Man egregore, which manifests as the sum total of all signals and vibes in mass ecstasy.


Night of the Burn (source)

It is worth pointing out that not all of the metronomes in the Playa are created equal. Some art cars, for example, send highly specific and culturally-bound signals (e.g. country music, Simon & Garfunkel, Michael Jackson, etc.). While these metronomes will have their specific followings (i.e. you can always find a group of dedicated Pink Floyd fans), their ability to interface with the general Burner vibe is limited by their specificity and temporal irregularity. The more typical metronomic texture you will find scattered all around the Playa comes from art forms that make use of more general patternceutical Schelling points with a stronger and more general metronomic capacity. Of note is the prevalence of house music and other 110 to 140 bpm (beats per minute) music that is able to entrain your brain from a distance and motivate you to move towards it, whether or not you are able to recognize the particular song. If you listen carefully to e.g. Palenque Norte recordings, you will notice the occasional art car driving by, and the music it is blasting will usually have its tempo within that range, with a strong, repeating, and easily recognizable beat structure. I suspect that this tendency is the natural emergent effect of the evolutionary selection pressures that art forms endure from one Burn to another, which benefit patterns that can captivate a lot of human attention in a competitive economy of recreational states of consciousness.


Android Jones’ Samskara at Camp Mystic 2017 (an example of the Open Individualist Schelling Vibe – i.e. the religion of the ego-dissolving LSD frequency of consciousness)

And then there are the extremely general metronome strategies that revolve around universal principles. The best example I found of this attention-capturing approach was the aesthetic of oneness, which IMO seemed to reach its highest expression at Camp Mystic:

Inspired by a sense of mystery & wonder, we perceive the consciousness of “We Are All One”. Mystics encourage the enigmatic spirit to explore a deeper connection not only on this planet and all that exists within, but the realm of the entire Universe.

– Who are the Mystics?

At their Wednesday night “White Dance Party” (where you are encouraged to dress in white) Camp Mystic was blasting the strongest vibes of Open Individualism I witnessed this year. I am of the mind that philosophy is the soul of poetry, and that massive party certainly had as its underlying philosophy the vibe of oneness and unity. This vibe is itself a Schelling point in the state-space of consciousness… the religion of the boundary-dissolving LSD frequency is not a random state, but a central hub in the super-highway of the mind. I am glad these focal points made prominent appearances at Burning Man.

Uncontrollable Feedback Loops

It is worth pointing out that in a field as open and diverse as Burning Man, we are likely to encounter positive feedback systems with both good and bad effects on human wellbeing. An example of a positive feedback loop with bad effects would be the incidents that transpired around the “Carkebab” art installation:

The sculpture consisted of a series of cars piled on top of each other, held together by a central pole. The setup was clearly designed to be climbed, given the visible handles above the cars leading to a view cart at the top. However, in practice it turned out to be considerably more dangerous and harder to climb than it seemed. Now you may anticipate the problem. If you are told that this art piece is climbable but dangerous, you can easily conjure a mental image of a future event in which someone falls and gets hurt. And as soon as that happens, access to the art installation will be restricted. Thus, one reasons that there is a limited amount of time left in which one will be able to climb the structure. Now imagine a lot of people having that train of thought. As more people realize that an accident is imminent, more people are motivated to climb it before that happens, thus creating an incentive to go as soon as possible, leading to crowding, which in turn increases the chance of an accident. The more people approach the installation, the more imminent the closure seems, the more pressing it becomes to climb the structure before it becomes off-limits, and the more dangerous the whole thing gets. Predictably, the imminent accident did take place. Thankfully it only involved a broken shoulder rather than something more severe. And yet, why did we let it get to that point? Perhaps in the future we should have methods to detect positive feedback loops like this and put the brakes on before it’s too late…
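
The shape of this dynamic fits in a few lines of simulation. All of the parameters below are invented; the point is only the loop itself, where each safe-but-crowded hour increases urgency, which increases crowding, which increases the hazard:

```python
import random

# Toy model of the Carkebab dynamic (all parameters hypothetical).
urgency = 0.1          # perceived imminence of the installation being closed
accident = False
for hour in range(1, 101):
    climbers = int(5 + 50 * urgency)         # more urgency -> more simultaneous climbers
    p_accident = min(1.0, 0.002 * climbers)  # crowding raises the per-hour accident probability
    if random.random() < p_accident:
        accident = True
        print(f"hour {hour}: accident with {climbers} climbers on the structure")
        break
    urgency = min(1.0, urgency * 1.1)        # every safe-but-crowded hour makes closure feel closer

if not accident:
    print("no accident in 100 hours (unlikely under these parameters)")
```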

This leads to the topic of danger:

Counting Microlives

Can Burning Man be a place in which an abolitionist ethic can put down roots for long-term civilizational planning? Let’s briefly examine some of the potential acute, medium-term, and long-term costs of attending. Everyone has a limit, right? Some may want to think: “well, you only live once, let’s have fun”. But if you are one of the few who carries the wisdom, will, and love to move consciousness forward, this should not be how you think. What level of risk should an Effective Altruist be willing to accept in exchange for the benefits of Burning Man? I think that the critical question here is not “Is Burning Man dangerous?” but rather “How bad is it for you?”

Thankfully, actuaries, modern medicine, and economists have already developed a theoretical framework for putting a number on this question: namely, the concept of micromorts (a one-in-a-million chance of dying) and its sister concept, the microlife (one millionth of a lifespan, roughly half an hour, lost or gained by performing some activity). My preference is for microlives because they translate more easily into time and are, IMO, more conceptually straightforward. So here is the question: how many microlives should we be willing to spend to attend Burning Man? 10 microlives? 100 microlives? 1,000 microlives? 10,000 microlives?

Based on the fact that there are many long-term Burners still alive, I guesstimate that the upper bound cannot possibly be higher than 10,000, or we would know about it already; i.e. the percentage of people who get e.g. skin cancer or lung disease, or who die in other ways, would probably already be apparent in the community. Alternatively, it’s also possible that a reduced life expectancy as a result of attending e.g. 10+ Burns is an open secret among long-term Burners… they see their friends die at an inexplicably higher rate but are too afraid to talk about it honestly. After all, people tend to cling tightly to their main sources of meaning (what we call “emotionally load-bearing activities”), so a large amount of denial can be expected in this domain.

Additionally, discussing Burning Man micromorts might be a particularly touchy and difficult subject for a number of attendees. The reason is that part of the psychological value that Burning Man provides is a felt sense of confrontation with one’s fragility and mortality. Many older Burners seem to have come to terms with their own mortality quite well already. Indeed, perhaps accepting death as part of life may be one of the very mechanisms of action for the reduction in neuroticism caused by intense experiences like psychedelics and Burning Man.

But that is not my jazz. I would personally not want to recommend an activity that costs a lot of microlives to other people in team consciousness. While I want to come to terms with death as much as your next Silicon Valley mystically-inclined nerd, I also recognize that death-acceptance is a somewhat selfish desire. Paradoxically, living a long, healthy, and productive life is one of the best ways for us to improve our chances of helping consciousness-at-large given our unwavering commitment to the eradication of all sentient suffering.

The main acute risks of Burning Man could be summarized as: dehydration, sleep deprivation, ODing (especially via accidental dosing, which is not uncommon, sadly), being run over by large vehicles (especially by art cars, trucks, and RVs), and falling from art or having art fall on you. These risks can be mitigated by the motto of “doing only one stupid thing at a time” (cf. How not to die at Burning Man). It’s ok to climb a medium-sized art piece if you are fully sober, or to take a psychedelic if you have sitters and don’t walk around art cars, etc. Most stories of accidents one hears about start along the lines of: “So, I was drunk, and high, and on mushrooms, and holding my camera, and I decided to climb on top of the thunderdome, and…”. Yes, of course that went badly. Doing stupid things on top of each other has multiplicative risk effects.

In the medium term, a pretty important risk is that of being busted by law enforcement. After all, the financial, psychological, and physiological effects of going to prison are rather severe on most people. On a similar note, a non-deadly but psychologically devastating danger of living in the desert for a week is an increased risk of kidney stones due to dehydration. The 10/10 pain you are likely to experience while passing a kidney stone may have far-reaching traumatic effects on one’s psyche and should not be underestimated (sufferers experience an increased risk of heart disease and, I would suspect, suicide).

But of all of the risks, the ones that concern me the most are the long-term ones, given their otherwise silent nature. In particular, we have skin cancer due to UV exposure and lung/heart disease caused by high levels of PM2.5 particles. With respect to the skin component, it is worth observing that a large majority of Burning Man attendees are caucasian and thus at a significantly higher risk. Being a redhead, I’ve taken rather extreme precautions in this area. I apply SPF50+ sunscreen every couple of hours, use a wide-brim hat, wear arm sleeves [and gloves] for UV sun protection, wear sunglasses, stay in the shade as often as I can, etc. I recommend that other people also follow these precautions.

And with regards to dust… here we have the largest error bars. Does Burning Man dust cause lung cancer? Does it impair lung function? Does it cause heart disease? As far as I can tell, nobody knows the answer to these questions. A lot of people seem to believe that the airborne particles are too large to pose a problem, but I highly doubt that is the case. The only source I’ve been able to find that tried to quantify dangerous particles at Burning Man comes from Camp Particle, which unfortunately does not seem to have published its results (and only provides preliminary data without the critical PM2.5 measure I was looking for).

Here are two important thoughts in this area. First, let’s hope that the clay-like alkaline composition of Playa dust turns out to be harmless to the lungs. And second, like most natural phenomena, chances are that the concentration of dangerous particles in, e.g., 1-minute buckets follows a power law. I would strongly expect that at least 80% of the dust one inhales comes from the 20% of the time in which it is most present. More so, during dust storms and especially in white-outs, I would expect the concentration of dust in the air to be at least 1,000 times higher than the median concentration. If that’s true, breathing without protection during a white-out for as little as two minutes would be equivalent to breathing in “typical conditions” without protection for more than 24 hours. In other words, being strategic and diligent about wearing a heavy and cumbersome P100 mask may be far more effective than lazily taking on and off a more convenient (but less effective) mask throughout the day.

Personally, I chose to always have on hand a 3M half facepiece with P100 filters in case the dust suddenly became thicker. This did indeed save me from breathing dust during all the dust storms; the difference in air quality while wearing it was night and day. I will also say that while I prefer my look when I have a beard, I chose to fully shave during the event in order to guarantee a good seal with the mask. In retrospect, the fashion sacrifice does seem to have been worth it, though at the time I certainly missed having a beard.
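
For what it’s worth, here is the back-of-the-envelope behind the white-out claim, with my assumed numbers made explicit:

```python
# Back-of-the-envelope for the white-out claim (both inputs are assumptions from the text).
whiteout_multiplier = 1000   # assumed dust concentration in a white-out vs. the median
whiteout_minutes = 2         # minutes of unprotected breathing during the white-out

median_equivalent_hours = whiteout_multiplier * whiteout_minutes / 60
print(f"{median_equivalent_hours:.1f} hours")  # ~33.3 hours of "typical" breathing, i.e. more than a day
```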


The question remaining is: with a realistic amount of protection, what is the acceptable level of risk? I propose that you make up your mind before science tells us how dangerous Burning Man actually is. In my case, I am willing to endure up to 100 negative microlives per day at Burning Man (for a total of ~800 microlives) as the absolute upper bound. Anything higher than that and the experience wouldn’t be worth it for me, and I would not recommend it to memetic allies. Thankfully, I suspect that the actual danger is lower than that, perhaps in the range of 40 negative microlives per day (mostly in the form of skin cancer and lung disease). But the problem remains that this estimate has very wide error bars. This needs to be addressed.
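
For concreteness, here is what these budgets translate to in expected lifespan, using the standard convention that one microlife corresponds to roughly 30 minutes of a young adult’s remaining life expectancy; the per-day figures are the ones from the text, assuming an 8-day event:

```python
# Converting microlife budgets into time (1 microlife ≈ 30 minutes of life expectancy,
# following Spiegelhalter's convention; the budgets are the ones discussed above).
MINUTES_PER_MICROLIFE = 30

for label, microlives in [("upper bound (100/day * 8 days)", 800),
                          ("suspected (40/day * 8 days)", 320)]:
    hours = microlives * MINUTES_PER_MICROLIFE / 60
    print(f"{label}: {microlives} microlives ≈ {hours:.0f} hours ≈ {hours / 24:.1f} days")
```

Under these assumptions, the absolute upper bound works out to roughly 400 hours, or about two and a half weeks of statistical life expectancy, per Burn.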

And if the danger does turn out to be unacceptable, then we can still look to recreate the benefits of Burning Man in a safer way: Your Legacy Could Be To Move Burning Man to a Place With A Fraction of Its Micromorts Cost.

Dangerous Bonding

In the ideal case Burning Man would be an event that triggers our brains to produce “danger signals” without there actually being much danger at all. This is because with our current brain implementation, experiencing perceived danger is helpful for bonding, trust building, and a sense of self-efficacy and survival ability.

And now on to my talk…

Andrés Gómez Emilsson – Consciousness vs. Replicators

The video above documents my talk, which includes an extended Q&A with the audience. Below is a quick summary of the main points I touched on throughout the talk:

  1. Intro to Qualia Computing
    1. I started out by asking the audience if they had read any Qualia Computing articles. About 30% of them raised a hand. I then asked them how they found out about my talk, and it seems that the majority of the attendees (50%+) found it through the “What Where When” booklet. Since the majority of the people didn’t know about Qualia Computing before the talk, I decided to provide a quick introduction to some of the main concepts:
      1. What is qualia? – The raw way in which consciousness feels. Like the blueness of blue. Did you ever wonder as a kid whether other people saw the same colors as you? Qualia is that ineffable quality of experience that we currently struggle to communicate.
      2. Personal Identity:
        1. Closed Individualism – you start existing when you are born, stop existing when you die.
        2. Empty Individualism – brains are “experience machines” and you really are just a “moment of experience” disconnected from every other “moment of experience” your brain has generated or will generate.
        3. Open Individualism – we are all the “light of consciousness”. Reality has only one numerically identical subject of experience who is everyone, but which takes all sorts of forms and shapes.
        4. For the purpose of this talk I assume that Open Individualism is true, which provides a strong reason to care about the wellbeing of all sentient beings, even from a “selfish” point of view.
      3. Valence – This is the pleasure-pain axis. We take a valence realist view which means that we assume that there is an objective matter of fact about how much an experience is in pain/suffering vs. experiencing happiness/pleasure. There are pure heavenly experiences, pure hellish experiences, mixed states (e.g. enjoying music you love on awful speakers while wanting to pee), and neutral states (e.g. white noise, mild apathy, etc.).
      4. Evolutionary advantages of consciousness as part of the information processing pipeline – I pointed out that we also assume that consciousness is a real and computationally relevant phenomenon. And in particular, that the reason why consciousness was recruited by natural selection to process information has to do with “phenomenal binding”. I did not go into much detail about it at the time, but if you are curious I elaborated on this during the Q&A.
  2. Spirit of our research:
    1. Exploration + Knowledge/Synthesis. Many people either over-focus on exploration (especially people very high in openness to experience) or on synthesis (like conservatives who think “the good days are gone, let’s study history”). The spirit of our research combines both open-ended exploration and strong synthesis. We encourage people to both expand their evidential base and make serious time to synthesize and cross-examine their experiences.
    2. A lot of people treat consciousness research like people used to treat alchemy. That is, they have a psychological need to “keep things magical”. We don’t. We think that consciousness research is due to transition into a hard science and that many new possibilities will be unlocked after this transition, not unlike how chemistry is thousands of times more powerful than alchemy because it allows you to create synthesis pathways from scratch using chemistry principles.
  3. How People Think and Why Few Say Meaningful Things:
    1. What most people say and talk about is a function of the surrounding social status algorithm (i.e. what kind of things award social recognition) and deep-seated evolutionarily adaptive programs (such as survival, reproductive, and affective consistency programs).
    2. Nerds and people on the autism spectrum do tend to circumvent this general mental block and are able to discuss things without being motivated by status or evolutionary programs only, instead being driven by open-ended curiosity. We encourage our collaborators to have that approach to consciousness research.
  4. What the Economy is Based on:
    1. Right now there are three main goods that are exchanged in the global economy. These are:
      1. Survival – resources that help you survive, like food, shelter, safety, etc.
      2. Power – resources that allow you to acquire social and physical power and thus increase your chances of reproducing.
      3. Consciousness – information about the state-space of consciousness. Right now people are willing to spend their “surplus” resources on experiences even if they do not increase their reproductive success. A possible dystopian scenario is one in which people do not do this anymore – everyone spends all of their available time and energy pursuing jobs for the sake of maximizing their wealth and increasing their reproductive success. This leads us to…
  5. Pure Replicators – In Wireheading Done Right we introduced the concept of a Pure Replicator: “I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.” (e.g. crystals, viruses, programs, memes, genes)
    1. It is reasonable to expect that in the absence of evolutionary selection pressures that favor the wellbeing of sentient beings, in the long run everyone alive will be playing a Pure Replicator strategy.
  6. States vs. Stages vs. Theory of Morality
    1. Ken Wilber emphasizes that there is a key difference between states and stages. Whereas states of consciousness involve various degrees of oneness and interconnectedness (from normal everyday sober experiences all the way to unity consciousness and satori), how you interpret these states will ultimately depend on your own level of moral development and maturity. This is very true and important. But I propose a further axis:
    2. Levels of intellectual understanding of ethics. While stages of consciousness refer to the degree to which you are comfortable with ambiguity, can synthesize large amounts of seemingly contradictory experiences, and are able to be emotionally stable in the face of confusion, we think that there is another axis worth exploring that has more to do with one’s intellectual model of ethics.
    3. The 4 levels are:
      1. Good vs. evil – the most common view which personifies/essentializes evil (e.g. “the devil”)
      2. Balance between good and evil – the view that most people who take psychedelics and engage in eastern meditative practices tend to arrive at. People at this level tend to think that good implies evil, and that the best we can do is to reach a state of balance and equanimity. I argue that this is a rationalization to be able to deal with extremes of suffering; the belief itself is used as an anti-depressant, which shows the intrinsic contradictoriness and motivated reasoning behind adopting this ethical worldview. You believe in the balance between good and evil in general so that you, right now, can feel better about your life. You are still, implicitly, albeit in a low-key way, trying to regulate your mood like everyone else.
      3. Gradients of wisdom – this is the view that people like Sam Harris, Ken Wilber, John Lilly, David Chapman, Buddha, etc. seem to converge on. They don’t have a deontological “if-then” ethical programming like the people at the first level. Rather, they have general heuristics and meta-heuristics for navigating complex problems. They do not claim to know “the truth” or be able to identify exactly what makes a society “better for human flourishing” but they do accept that some environments and states of consciousness are more healthy and conducive to wisdom than others. The problem with this view is that it does not give you a principled way to resolve disagreements or a way forward for designing societies from first principles.
      4. Consciousness vs. pure replicators – this view is the culmination of intellectual ethical development (although you could still be very neurotic and unenlightened otherwise) which arises when one identifies the source of everything that is systematically bad as caused by patterns that are good at making copies of themselves but that either don’t add conscious value or actively increase suffering. In this framework, it is possible for consciousness to win, which would happen if we create a full-spectrum super-sentient super-intelligent singleton that explores the entire state-space of consciousness and rationally decides what experiences to instantiate at a large scale based on the empirically revealed total order of consciousness.
  7. New Reproductive Strategies
    1. Given that we on team consciousness are in a race against Pure Replicator Hell scenarios it is important to explore ways in which we could load the dice in the favor of consciousness. One way to do so would be to increase the ways in which prosocial people are able to reproduce and pass on their pro-consciousness genes going forward. Here are a few interesting examples:
      1. Gay + Lesbian couple – for gay and lesbian couples with long time horizons we could help them have biological kids with the following scheme: Gay couple A + B and lesbian couple X + Z could combine their genes and have 4 kids A/X, A/Z, B/X, B/Z. This would create the genetic and game-theoretical incentives for this new kind of family structure to work in the long term.
      2. Genetic spellchecking – one of the most promising ways of increasing sentient welfare is to apply genetic spellchecking to embryos. This means that we would be reducing the mutational load of one’s offspring without compromising one’s genetic payload (and thus selfish genes would agree to the procedure and lead to an evolutionarily stable strategy). You wouldn’t ship code to production without testing and debugging, you wouldn’t publish a book without someone proof-reading it first, so why do we push genetic code to production without any debugging? As David Pearce says, right now every child is a genetic experiment. It’s terrible that such a high percentage of them lead to health and mental problems.
      3. A reproductive scheme in which 50% of the genes come from an “intelligently vetted gene pool” and the other 50% come from the parents’ genes. This would be very unpopular at first, but after a generation or two we would see that all of the kids who are the result of this procedure are top of the class, win athletic competitions, start getting Nobel prizes and Fields medals, etc. So soon every parent will want to do this… and indeed from a selfish gene point of view there will be no option but to do so, as it will make the difference between passing on some copies vs. none.*
      4. Dispassionate evaluation of the merits and drawbacks of one’s genes in a collective of 100 or more people where one recombines the genetic makeup of the “collective children” in order to maximize both their wellbeing and the information gained. In order to do this analysis in a dispassionate way we might need to recruit 5-meo-dmt-like states of consciousness that make you identify with consciousness rather than with your particular genes, and also MDMA-like states of mind in order to create a feeling of connection to source and universal love even if your own patterns lose out at some point… which they will after long enough, because eventually the entire gene pool would be replaced by a post-human genetic make-up.
  8. Consciousness vs. Replicators as a lens – I discussed how one can use the 4th stage of intellectual ethical development as a lens to analyze the value of different patterns and aesthetics. For example:
    1. Conservatives vs. Liberals (stick to your guns and avoid cancer vs. be adaptable but expose yourself to nasty dangers)
    2. Rap Music vs. Classical or Electronic music (social signaling vs. patternistic valence exploration)
  9. Hyperstition – Finally, I discussed the concept of hyperstition, which refers to “ideas that make themselves real”. I explored it in the first Burning Man article. The core idea is that states of consciousness can indeed transform the history of the cosmos. In particular, high-energy states of mind like those experienced under psychedelics allow for “bigger ideas” and thus increase the upper bound of “irreducible complexity” for one’s thoughts. An example of this is coming up with further alternative reproductive strategies, which I encouraged the audience to do in order to increase the chances that team consciousness wins in the long term…

The End.


Bonus content: things I overheard virgin burners say:

  • “Intelligent people build intelligent civilizations. I now get what a society made of brilliant people would look like.”
  • “Burning Man is a magical place. It seems like it is one of the only places on Earth where the Spirit World and the Physical World intersect and play with each other.”
  • “It is not every day that you engage in a deeply transformative conversation before breakfast.”

* Thanks to Alison Streete for this idea.

Qualia Computing at Burning Man 2018: “Consciousness vs Replicators” talk

I’m thrilled to announce that I will be going to Burning Man for the second time this year. I will give a talk about Consciousness vs. Pure Replicators. The talk will be at Palenque Norte‘s consciousness-focused speaker series hosted by Camp Soft Landing.


The whole experience last year was very eye-opening, and as a result I wrote an (extremely) long essay about it. The essay introduces a wide range of entirely new concepts, including “The Goldilocks Zone of Oneness” and “Hybrid Vigor in the context of post-Darwinian ethics.” It also features a section about my conversation with God at Burning Man.

If you are attending Burning Man and would like to meet with me, I will be available for chatting and hanging out right after my talk (call it the Qualia Research Institute Office Hours at Burning Man).


Here are the details of the talk:

Andrés Gómez Emilsson – Consciousness vs. Replicators

Date and Time: Wednesday, August 29th, 2018, 3 PM – 4:30 PM
Type: Class/Workshop
Located at Camp: Camp Soft Landing (8:15 & C (Cylon). Mid-block on C, between 8 and 8:30.)

Description:

Patterns that are good at making copies of themselves are not necessarily good from an ethical point of view. We call Pure Replicators, in the context of brains and minds, those beings that use all of their resources for the purpose of replicating. In other words, beings that replicate without regard for their own psychological wellbeing (if they are conscious) or the wellbeing of others. Inasmuch as we believe that value is present in the quality of experience, perhaps to be “ethical” is to be a steward and advocate for the wellbeing of as many of the “moments of experience” that exist in reality as one can. We will talk about how an “economy of information about the state-space of consciousness” can be a helpful tool in preventing a pure-replicator take-over. Lastly, we will announce the existence of a novel test of consciousness that can be used to identify non-sentient artifacts or robots passing for humans within the crowd.

 

The Banality of Evil

In response to the Quora question “I feel like a lot of evil actions in the world have supporters who justify them (like Nazis). Can you come up with some convincing ways in which some of the most evil actions in the world could be justified?”, David Pearce writes:


“Tout comprendre, c’est tout pardonner.” (“To understand all is to forgive all.”)
(Leo Tolstoy, War and Peace)

Despite everything, I believe that people are really good at heart.
(Anne Frank)

The risk of devising justifications of the worst forms of human behaviour is there are people gullible enough to believe them. It’s not as though anti-Semitism died with the Third Reich. Even offering dispassionate causal explanation can sometimes be harmful. So devil’s advocacy is an intellectual exercise to be used sparingly.

That said, the historical record suggests that human societies don’t collectively set out to do evil. Rather, primitive human emotions get entangled with factually mistaken beliefs and ill-conceived metaphysics, with ethically catastrophic consequences. Thus the Nazis seriously believed in the existence of an international Jewish conspiracy against the noble Aryan race. Hitler, so shrewd in many respects, credulously swallowed The Protocols of the Elders of Zion. And as his last testament disclosed, obliquely, Hitler believed that the gas chambers were a “more humane means” than the terrible fate befalling the German Volk. Many Nazis (Himmler, Höss, Stangl, and maybe even Eichmann) believed that they were acting from a sense of duty – a great burden stoically borne. And such lessons can be generalised across history. If you believed, like the Inquisition, that torturing heretics was the only way to save their souls from eternal damnation in Hell, would you have the moral courage to do likewise? If you believed that the world would be destroyed by the gods unless you practised mass human sacrifice, would you participate? [No, in my case, albeit for unorthodox reasons.]

In a secular context today, there exist upstanding citizens who would like future civilisation to run “ancestor simulations”. Ancestor simulations would create inconceivably more suffering than any crime perpetrated by the worst sadist or deluded ideologue in history – at least if the computational-functional theory of consciousness assumed by their proponents is correct. If I were to pitch a message to life-lovers aimed at justifying such a monstrous project, as you request, then I guess I’d spin some yarn about how marvellous it would be to recreate past wonders and see grandpa again.
And so forth.

What about the actions of individuals, as distinct from whole societies? Not all depraved human behaviour stems from false metaphysics or confused ideology. The grosser forms of human unpleasantness often stem just from our unreflectively acting out baser appetites (cf. Hamiltonian spite). Consider the neuroscience of perception. Sentient beings don’t collectively perceive a shared public world. Each of us runs an egocentric world-simulation populated by zombies (sic). We each inhabit warped virtual worlds centered on a different body-image, situated within a vast reality whose existence can be theoretically inferred. Or so science says. Most people are still perceptual naïve realists. They aren’t metaphysicians, or moral philosophers, or students of the neuroscience of perception. Understandably, most people trust the evidence of their own eyes and the wisdom of their innermost feelings over abstract theory. What “feels right” is shaped by natural selection. And what “feels right” within one’s egocentric virtual world is often callous and sometimes atrocious. Natural selection is amoral. We are all slaves to the pleasure-pain axis, however heavy the layers of disguise. Thanks to evolution, our emotions are “encephalised” in grotesque ways. Even the most ghastly behaviour can be made to seem natural – like Darwinian life itself.

Are there some forms of human behaviour so appalling that I’d find it hard to play devil’s advocate in their mitigation – even as an intellectual exercise?

Well, perhaps consider, say, the most reviled hate-figures in our society – even more reviled than murderers or terrorists. Most sexually active paedophiles don’t set out to harm children: quite the opposite, harm is typically just the tragic by-product of a sexual orientation they didn’t choose. Posthumans may reckon that all Darwinian relationships are toxic. Of course, not all monstrous human behavior stems from wellsprings as deep as sexual orientation. Thus humans aren’t obligate carnivores. Most (though not all) contemporary meat eaters, if pressed, will acknowledge in the abstract that a pig is as sentient and sapient as a prelinguistic human toddler. And no contemporary meat eaters seriously believe that their victims have committed a crime (cf. Animal trial – Wikipedia). Yet if questioned why they cause such terrible suffering to the innocent, and why they pay for a hamburger rather than a veggieburger, a meat eater will come up with perhaps the lamest justification for human depravity ever invented:

“But I like the taste!”

Such is the banality of evil.

Person-moment affecting views

by Katja Grace (source)

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree that the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer as to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in various ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do for which the sentence ‘If Alice didn’t exist, she wouldn’t have done X’ rings true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.
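
For readers who find it easier to see a view as a mechanism, here is one possible encoding of the Alice example. The representation is mine, not Katja’s; it only illustrates how value could accrue exclusively through satisfied preferences about other person-moments:

```python
# Minimal formalization of the Alice example (the encoding is mine, not Katja's).
# A world is a set of person-moments; value accrues only when some person-moment's
# preference about another person-moment is satisfied within that world.
world_A = {"Alice_t0": {"wants": ("Alice_t1", "computer_scientist")},
           "Alice_t1": {"is": "computer_scientist"}}
world_C = {"Alice_t0": {"wants": ("Alice_t1", "computer_scientist")},
           "Alice_t1": {"is": "fighter_pilot"}}

def value(world):
    """Count satisfied 'meddling' preferences: person-moments scoring other person-moments."""
    total = 0
    for moment in world.values():
        if "wants" in moment:
            target, desired = moment["wants"]
            if target in world and world[target].get("is") == desired:
                total += 1
    return total

print(value(world_A), value(world_C))  # 1 vs. 0: world A wins, but only via Alice_t0's preference
```

Note that Alice_t0 is stipulated to be the same person-moment in both worlds, which is what makes the comparison legitimate on this view: the benefit lands on her, not on the differing Alice_t1s.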

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world, for instance; it is only good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good. (A toy sketch of this valuation rule appears after the list below.)

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference
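To make the valuation rule above concrete, here is a minimal sketch in Python. It assumes the simplest possible reading of the view: a world’s value is just the number of satisfied ‘meddling’ preferences, i.e. preferences that person-moments hold about other person-moments. All of the names and data structures here (PersonMoment, World, and so on) are hypothetical illustrations, not anything from Katja’s post.

```python
# Toy model: a world's value = number of satisfied "meddling" preferences,
# i.e. preferences one person-moment holds about another person-moment.
# Joy itself contributes nothing directly, matching the list above.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class PersonMoment:
    name: str    # e.g. "Alice_t1"
    career: str  # the feature other person-moments may have preferences about


@dataclass
class World:
    moments: list
    # Each preference: (holder's name, target's name, predicate over target).
    preferences: list = field(default_factory=list)

    def value(self) -> int:
        """Count satisfied meddling preferences; nothing else adds value."""
        total = 0
        for _holder, target_name, predicate in self.preferences:
            targets = [m for m in self.moments if m.name == target_name]
            total += sum(1 for m in targets if predicate(m))
        return total


wants_cs = ("Alice_t0", "Alice_t1", lambda m: m.career == "computer scientist")

# World A: Alice_t0 wants Alice_t1 to be a computer scientist, and she is.
world_a = World(
    moments=[PersonMoment("Alice_t0", "student"),
             PersonMoment("Alice_t1", "computer scientist")],
    preferences=[wants_cs],
)

# World C: the same Alice_t0 with the same preference, but Alice_t1 is a pilot.
world_c = World(
    moments=[PersonMoment("Alice_t0", "student"),
             PersonMoment("Alice_t1", "fighter pilot")],
    preferences=[wants_cs],
)

assert world_a.value() > world_c.value()  # A is better for Alice_t0, so better overall
```

Note how, exactly as stated above, satisfying the preference is all that counts: if Alice_t0 had instead wanted Alice_t1 tortured, a world granting that wish would score just as well.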

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person-moment for the benefit of previous person-moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, though still somewhat, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance for different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you still weight some less, on the grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.



An interesting thing to point out here is that what Katja describes as the further-fact view is equivalent, in different terminology, to what we here call Closed Individualism (cf. Ontological Qualia). This is the common-sense view that you start existing when you are born and stop existing when you die (a view that also has soul-based variants with possible pre-birth and post-death existence). This view is not very philosophically tenable because it presupposes that there is an enduring metaphysical ego distinct for every person. And yet, the vast majority of people still hold strongly to Closed Individualism. In some sense, in the article Katja tries to rescue the common-sense aspect of Closed Individualism in the context of ethics. That is, by steel-manning the common-sense notion that people (rather than moments of experience) are the relevant units for morality while also negating further-fact views, she provides reasons to keep using Closed Individualism as an intuition pump in ethics (if only for pragmatic reasons). In general, I consider this kind of discussion to be a very fruitful endeavor, as it approaches ethics by touching upon the key parameters that matter fundamentally: identity, value, and counterfactuals.

As you may gather from pieces such as Wireheading Done Right and The Universal Plot, at Qualia Computing we tend to think the most coherent ethical system arises when we take as a premise that the relevant moral agents are “moments of experience”. Contra person-affecting views, we don’t think it is meaningless to say that a given world is better than another one if not everyone in the first world is also in the second one. On the contrary – it really does not matter who lives in a given world. What matters is the raw subjective quality of the experiences in such worlds. If it is meaningless to ask “who is experiencing Alice’s experiences now?” once you know all the physical facts, then moral weight must be encoded in such physical facts alone. In turn, the narrative aspect of an experience may well turn out to be irrelevant for determining its intrinsic value. People’s self-narratives may have important instrumental uses, but at their core they don’t make it to the list of things that intrinsically matter (unlike, say, avoiding suffering).

A helpful philosophical move that we have found adds a lot of clarity here is to analyze the problem in terms of Open Individualism. That is, assume that we are all one consciousness and take it from there. If so, then the probability that you are a given person would be weighted by the amount of consciousness (or the number of moments of experience, depending) that such a person experiences throughout his or her life. You are everyone in this view, but you can only be each person one at a time, from their own limited point of view. So there is a sensible way of weighting the importance of each person: it is a function of the amount of time you spend being him or her (normalized by the amount of consciousness that person experiences, in case that is variable across individuals).
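As a rough illustration of this weighting scheme, consider the following sketch (every number and field here is an assumption made up for the example):

```python
# Sketch: under Open Individualism, the "probability of being" a given
# person is proportional to how much consciousness that person
# instantiates over a lifetime (time lived, scaled by intensity).

def being_weights(people: dict) -> dict:
    """people maps name -> (years_lived, average_intensity_of_consciousness).

    Returns a normalized probability distribution over persons.
    """
    raw = {name: years * intensity
           for name, (years, intensity) in people.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}


weights = being_weights({
    "Alice": (80, 1.0),  # long life at baseline intensity
    "Bob":   (40, 1.0),  # half the lifespan, so half the weight
    "Crow":  (10, 0.3),  # short life, dimmer consciousness (if variable)
})
print(weights)  # approximately {'Alice': 0.650, 'Bob': 0.325, 'Crow': 0.024}
```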

If consciousness emerges victorious in its war against pure replicators, then it would make sense that the main theory of identity people would hold by default would be Open Individualism. After all, it is only Open Individualism that aligns individual incentives and the total wellbeing of all moments of experience throughout the universe.

That said, in principle, it could turn out that Open Individualism is not needed to maximize conscious value – that while it may be useful instrumentally to align the existing living intelligences towards a common consciousness-centric goal (e.g. eliminating suffering, building a harmonic society, etc.), in the long run we may find that ontological qualia (the aspect of our experience that we use to represent the nature of reality, including our beliefs about personal identity) has no intrinsic value. Why bother experiencing heaven in the form of a mixture of 95% bliss and 5% ‘a sense of knowing that we are all one’, if you can instead just experience 100% pure bliss?

At the ethical limit, anything that is not perfectly blissful might end up being thought of as a distraction from the cosmic telos of universal wellbeing.

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

– Alan Watts, talking about Camus’s claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (a.k.a. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together, as a unit, in order to hang together. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. For now, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain *in its own terms* the reasons why other worldviews are popular as well. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people. That evil beings exist objectively, and that it is righteous to punish them and see them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck here. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, in which there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumption of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above), we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and it might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived either as sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

– “Balance Between Good and Evil”, TVTropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication; because they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws); or because they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and project it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly this is because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, binds them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, one uses this worldview as a mood-booster, and in practice, it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” The Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and by failing to think clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. This becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for such a reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, in that they recognize that there are states of consciousness that are inherently better than others in some intrinsic subjective-value sense.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics, for fear of sounding unwise or insensitive. And when push comes to shove, people at this level lack good arguments to decisively rule out why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures push in the opposite direction, where negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and relative to the general population, they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is to formalize the situation and work out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts that suggest it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used in order to steer evolution in a positive direction. In particular, as of 2017:

  1. A very big proportion of the economy is dedicated to trading positive experiences for money, rather than just survival or tools for power. Thus an economy of information about states of consciousness is still feasible.
  2. There is a large fraction of the population who is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and ultimately, economically useful (see: hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested (or at least curious) about the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is not only very positive in terms of its valence, but also, amazingly, extremely pro-social; future sustainable versions of it could be recruited to stabilize societies where the highest value is the collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider for example the contrast between Burning Man and Las Vegas. Burning Man is a place that works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place that is filled to the top with pure replicators in the form of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that will make it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. Per the definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. And in the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they may require. Such utility will be a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Figure: a system characterized by its inputs, outputs, constraints, and efficiency metrics]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence. We have the inputs, the outputs, the constraints (such as temperature, materials, etc.), and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

[Figure: a basic system s transforming inputs i into outputs o, with efficiency e(o|i, s, c)]

Now, we introduce the notion of the “valence for the system given a particular input” (i.e. the valence for the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, are highly controversial questions (e.g. “Measuring Happiness and Suffering“). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

[Figure: the utility function U(e(o|i, s, c), v(s|i)); altruistic intelligences weigh the valence term]

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the sum, over inputs, of the utility of each input weighted by its probability.

[Figure: the probability distribution P(I) over the inputs the system will process]

(Note: a more complete formalization would also weight each input-output transformation by its importance, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system given the distribution of inputs it’s used for, its valence, its efficiency metrics, and its constraints as E[U(s|v, e, c, P(I))]:

[Figure: the intelligence selects the system s that maximizes E[U(s|v, e, c, P(I))]]

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
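Putting the pieces together, here is a toy instantiation of this formalism in Python. The linear form U(e, v) = e + altruism * v, and all of the numbers, are illustrative assumptions only; the point is simply to show how a pure replicator (altruism weight of zero) and an altruistic intelligence can rank the very same systems differently.

```python
# Toy version of the formalism: systems are scored by expected utility,
# where U(e, v) = e + altruism * v. altruism = 0 models a pure replicator.

def expected_utility(system: dict, input_dist: dict, altruism: float) -> float:
    """E[U(s|v, e, c, P(I))] for one candidate system.

    system[i] = (e, v): the efficiency e(o|i, s, c) and valence v(s|i)
    achieved on input i. input_dist[i] = P(i).
    """
    return sum(p * (system[i][0] + altruism * system[i][1])
               for i, p in input_dist.items())


def choose_system(systems: dict, input_dist: dict, altruism: float) -> str:
    """The intelligence picks the system maximizing expected utility."""
    return max(systems, key=lambda name: expected_utility(
        systems[name], input_dist, altruism))


input_dist = {"easy": 0.7, "hard": 0.3}
systems = {
    # name: {input: (efficiency, valence)}
    "suffering_optimizer": {"easy": (1.0, -0.9), "hard": (0.9, -1.0)},
    "blissful_but_slower": {"easy": (0.8, +0.8), "hard": (0.6, +0.7)},
}

print(choose_system(systems, input_dist, altruism=0.0))  # suffering_optimizer
print(choose_system(systems, input_dist, altruism=1.0))  # blissful_but_slower
```

With altruism = 0 the raw efficiency numbers decide, and the system that runs on suffering wins; with altruism = 1 the valence term flips the choice.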

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on their altruism. If altruism does not increase reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). In other words, as evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of such systems: evolution pushes intelligences to become pure replicators in the long run.

Hence we should ask: how can altruism increase the chances of reproduction? A possibility would be for the environment to reward entities that are altruistic. Unfortunately, in the long run we might see that environments that reward altruistic entities produce less efficient entities than environments that don’t. If there are two very similar environments, one which rewards altruism and one which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism is implementing the reward for altruism in the former. Thus, we suggest finding environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to exist for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing such valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in sketching the rough outlines of the kinds of theories on offer. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How we could formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned about pain or pleasure, and there is cause to believe that behaviorism itself might work as an anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms, and for what reason, are in fact hedonically charged has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind gets more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses being made about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state might be. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than this working as a reductio ad absurdum of functionalism, many of its proponents have instead turned around to conclude that consciousness itself is not a natural kind. This does represent an important challenge for defining the valence of the system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.

We also should note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account of consciousness refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purpose of this article should be good enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, in both relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need this frame-invariant description of what the system is in a given state. Thus, we need a function that takes as input physical measurements of the system and returns the best possible approximation to what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. In practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well-approximated by high-level features of it.
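To summarize how the pipelines differ, here is a schematic sketch. A(s) and Ph(s) are the intermediary functions named above; the concrete rules inside them (counting ‘reward’ algorithms, reading off ‘consonance’) are made-up placeholders, since discovering the real A(s) and Ph(s) is precisely the open problem.

```python
# Schematic: where functionalism vs. non-materialist physicalism locate
# the computation of valence. All scoring rules below are placeholders.

def A(system: dict) -> list:
    """Stand-in for 'which algorithms does this system implement?'
    (The text notes this question may have no frame-invariant answer.)"""
    return system.get("algorithms", [])


def Ph(system: dict) -> dict:
    """Stand-in for the frame-invariant physical description of the system."""
    return system.get("physical_state", {})


def v_functionalist(system: dict) -> int:
    # Algorithmic level: valence is read off A(s).
    score = {"reward": 1, "punishment": -1}
    return sum(score.get(alg, 0) for alg in A(system))


def v_physicalist(system: dict) -> int:
    # Implementation level: valence is read off Ph(s).
    state = Ph(system)
    return state.get("consonance", 0) - state.get("dissonance", 0)


toy_system = {
    "algorithms": ["reward", "planning"],
    "physical_state": {"consonance": 3, "dissonance": 1},
}
print(v_functionalist(toy_system), v_physicalist(toy_system))  # 1 2
```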

The main problem with Non-Materialist Physicalism comes when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all of the relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this was the case, it would seem to be a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of such stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal, aspects of our existence happen to be computationally advantageous. Just as using quantum computing for factoring large numbers, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, so might using bound conscious experiences help sentient animals outcompete non-sentient ones.

Of course, there is as yet no evidence of macroscopic quantum coherence in the brain, and the brain is too hot for it anyway, so on the face of it Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that makes both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters faced by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations delimited by the pre-existing gene pool (and the behavioral requirements provided by the environment). At one extreme, it may be that a fully optimized brain simply does not need consciousness to solve problems. At the other extreme, it may turn out that consciousness is extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is a case so optimistic and incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every other alternative given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe is blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of the optimization process of evolution.

In the extreme opposite case, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out the way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important, and also extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients a superior alternative to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Hence the case for pushing our evolutionary history in the right direction right now, before we give up.

Coming Next: The Hierarchy of Cooperators

Now that we covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality in its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, an availability of states of consciousness, and a set of beliefs and practices that will help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table would be producing the output. Hence, the algorithms that are producing the output are probably highly complex and use information processing with human-like linguistic representations, which means that, by all means, the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think that these ones are about the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 

Traps of the God Realm

From Opening the Heart of Compassion by Martin Lowenthal and Lar Short (pages 132-136).

Seeking Oneness

In this realm we want to be “one with the universe.” We are trying to return to a time when we felt no separation, when the world of our experience seemed to be the only world. We want to recover the experience and comfort of the womb. In the universe of the womb, everything was ours without qualification and was designed to support our existence and growth. Now we want the cosmos to be our womb, as if it were designed specifically for our benefit.

We want satisfaction to flow more easily, naturally and automatically. This seems less likely when we are enmeshed in the everyday affairs of the world. Therefore, we withdraw to the familiar world of what is ours, of what we can control, and of our domain of influence. We may even withdraw to a domain in the mind. Everything seems to come so much easier in the realm of thought, once we have achieved some modest control over our minds. Insulating ourselves from the troubles of others and of life, we get further seduced by the seeming limitlessness of this mental world. 

In this process of trance formation, we try to make every sound musical, every image a work of art, and every feeling pleasant. Blocking out all sources of irritation, we retreat to a self-proclaimed “higher” plane of being. We cultivate the “higher qualities of life,” not settling for a “mundane” life.

Masquerade of Higher Consciousness

The danger for those of us on a spiritual path is that the practices and the teachings can be enlisted to serve the realm rather than to dissolve our fixations and open us to truth. We discover that we can go beyond sensual pleasure and material beauty to refined states of consciousness. We achieve purely mental pleasures of increasing subtlety and learn how to maintain them for extended periods. We think we can maintain our new vanity and even expand it to include the entire cosmos, thus vanquishing change, old age, and death. Chogyam Trungpa Rinpoche called this process “spiritual materialism.”

For example, we use a sense of spaciousness to expand our consciousness by imposing our preconception of limitlessness on the cosmos. We see everything that we have created and “it is good.” Our vanity in the god realm elevates our self-image to the level of the divine–we feel capable of comprehending the universe and the nature of reality.

We move beyond our contemplation of limitless space, expanding our consciousness to include the very forces that create vast space. As the creator of vast space, we imagine that we have no boundaries, no limits, and no position. Our mind can now include everything. We find that we do not have concepts for such images and possibilities, so we think that the Divine or Essence must be not any particular thing we can conceive of, must be empty of conceptual characteristics.

Thus our vain consciousness, as the Divine, conceives that it has no particular location, is not anything in particular, and is itself beyond imagination. We arrive at the conclusion that even this attempt to comprehend emptiness is itself a concept, and that emptiness is devoid of inherent meaning. We shift our attention to the idea of being not not any particular thing. We then come to the glorious position that nothing can be truly stated, that nothing has inherent value. This mental understanding becomes our ultimate vanity. We take pride in it, identify as someone who “knows”, and adopt a posture in the world as someone who has journeyed into the ultimate nature of the unknown.

In this way we create more and more chains that bind us and limit our growth as we move ever inward. When we think we are becoming one with the universe, we are only achieving greater oneness with our own self-image. Instead of illuminating our ignorance, we expand its domain. We become ever more disconnected from others, from communication and true sharing, and from compassion. We subtly bind ourselves ever more tightly, even to the point of suffocation, under the guise of freedom in spaciousness.

Spiritual Masquerades of Teachers and Devoted Students

As we acquire some understanding and feel expansive, we may think we are God’s special gift to humanity, here to teach the truth. Although we may not acknowledge that we have something to prove, at some level we are trying to prove how supremely unique and important we are. Our spiritual life-style is our expression of that uniqueness and significance.

Spiritual teachers run a great danger of falling into the traps of the god realm. If a teacher has charisma and the ability to channel and radiate intense energy, this power may be misused to engender hope in students and to bind them in a dependent relationship. The true teacher undermines hope, teaches by the example of wisdom and compassion, and encourages students to be autonomous by investigating truth themselves, checking their own experience, and trusting their own results more than faith.

The teacher is not a god but a bridge to the unknown, a guide to the awareness qualities and energy capacities we want for our spiritual growth. The teacher, who is the same as we are, demonstrates what is possible in terms of aliveness and how to use the path of compassion to become free. In a sense, the teacher touches both aspects of our being: our everyday life of habits and feelings on the one hand and our awakened aliveness and wisdom on the other. While respect for and openness to the teacher are important for our growth and freedom, blind devotion fixates us on the person of the teacher. We then become confined by the limitations of the teacher’s personality rather than liberated by the teachings.

False Transcendence

Many characteristics of this realm–creative imagination, the tendency to go beyond assumed reality and individual perspectives, and the sense of expansiveness–are close to the underlying dynamic of wonderment. In wonder, we find the wisdom qualities of openness, true bliss, the realization of spaciousness within which all things arise, and alignment with universal principles. The god realm attitude results in superficial experiences that fit our preconceptions of realization but that lack the authenticity of wonder and the grounding in compassion and freedom.

Because the realm itself seems to offer transcendence, this is one of the most difficult realms to transcend. The heart posture of the realm propels us to transcend conflict and problems until we are comfortable. The desire for inner comfort, rather than for an authentic openness to the unknown, governs our quest. But many feelings arise during the true process of realization. At certain stages there is pain and disorientation, and at others a kind of bliss that may make us feel like we are going to burst (if there were something or someone to burst). When we settle for comfort we settle for the counterfeit of realization–the relief and pride we feel when we think we understand something.

Because we think that whatever makes us feel good is correct, we ignore disturbing events, information, and people and anything else that does not fit into our view of the world. We elevate ignorance to a form of bliss by excluding from our attention everything that is non-supportive.

Preoccupied with self, with grandiosity, and with the power and radiance of our own being, we resist the mystery of the unknown. When we are threatened by the unknown, we stifle the natural dynamic of wonder that arises in relation to all that is beyond our self-intoxication. We must either include vast space and the unknown within our sense of ourselves or ignore it because we do not want to feel insignificant and small. Our sense of awe before the forces of grace cannot be acknowledged for fear of invalidating our self-image.

Above the Law

According to our self-serving point of view, we are above the laws of nature and of humankind. We think that, as long as what we do seems reasonable to us, it is appropriate. We are accountable to ourselves and not to other people, the environment, or society. Human history is filled with examples of people in politics, business, and religion who demonstrated this attitude and caused enormous suffering.

Unlike the titans who struggle with death, we, as gods, know that death is not really real. We take comfort in the thought that “death is an illusion.” The only people who die are those who are stuck and have not come to the true inner place beyond time, change, and death. We may even believe that we have the potential to develop our bodies and minds to such a degree that we can reverse the aging process and become one of the “immortals.”

A man, walking on a beach, reaches down and picks up a pebble. Looking at the small stone in his hand, he feels very powerful and thinks of how with one stroke he has taken control of the stone. “How many years have you been here, and now I place you in my hand.” The pebble speaks to him, “Though to you, I am only a grain of sand in your hand, you, to me, are but a passing breeze.”

Mental Health as an EA Cause: Key Questions

Michael Johnson and I will be hanging out at the EA Global (SF) 2017 conference this weekend representing the Qualia Research Institute. If you see us and want to chat, please feel free to approach us. This is what we look like:

[Photo: At EA Global 2016 at Berkeley]

I will be handing out the following flyer:


Mental Health as an EA Cause Area: Key Questions

  1. What makes a state of consciousness feel good or bad?
  2. What percentage of worldwide suffering is directly caused by mental illness and/or the hedonic treadmill rather than by external circumstances?
  3. Is there a way to “sabotage the hedonic treadmill”?
  4. Can benevolent and intelligent sentient beings be fully animated by gradients of bliss (offloading nociception to insentient mechanisms)? (See the sketch after this list.)
  5. Can we uproot the fundamental causes of suffering by tweaking our brain structure without compromising our critical thinking?
  6. Can consciousness technologies play a part in making the world a high-trust super-organism?
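
Question 4 is amenable to a toy demonstration. The sketch below (my illustration; the actions and valence values are hypothetical) shows the core point: action selection can run entirely on differences in valence, so uniformly shifting every state into positive territory preserves the information needed to choose well:

```python
# Toy sketch of "gradients of bliss" (illustrative numbers only): behavior
# driven by *relative* valence is unchanged if all states are shifted into
# positive territory, so nociception is not needed for action selection.

classic = {"withdraw_hand": -0.8, "rest": 0.1, "help_friend": 0.7}

# Shift every valence up so nothing is experienced as negative.
bliss = {action: value + 1.0 for action, value in classic.items()}

def choose(valences):
    # Selection uses only the ordering (the gradient), never the sign.
    return max(valences, key=valences.get)

assert choose(classic) == choose(bliss)  # same behavior either way
print(choose(bliss))  # -> "help_friend"
```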

[Image: Wallpaper symmetry chart with 5 different notations (slightly different diagram in handout)]

If these questions intrigue you, you are likely to find the following readings valuable:

  1. Principia Qualia
  2. Qualia Computing So Far
  3. Quantifying Bliss: Talk Summary
  4. The Tyranny of the Intentional Object
  5. Algorithmic Reduction of Psychedelic States
  6. How to secretly communicate with people on LSD
  7. ELI5 “The Hyperbolic Geometry of DMT Experiences”
  8. Peaceful Qualia: The Manhattan Project of Consciousness
  9. Symmetry Theory of Valence “Explain Like I’m 5” edition
  10. Generalized Wada Test and the Total Order of Consciousness
  11. Wireheading Done Right: Stay Positive Without Going Insane
  12. Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation
  13. The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes

Who we are:
Qualia Research Institute (Michael Johnson & Andrés Gómez Emilsson)
Qualia Computing (this website; Andrés Gómez Emilsson)
Open Theory (Michael Johnson)

Printable version:

mental_health_as_ea_cause