AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and me on his AI Alignment Podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between, and arguments for and against, functionalism and qualia realism. We discuss definitions of consciousness and how consciousness might be causal, we explore Marr’s Levels of Analysis, and we discuss the Symmetry Theory of Valence. We also get into identity and consciousness and the world, the is-ought problem, and what this all means for AI alignment and building beautiful futures.

And then we end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is that there are some people who think that what really matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value. If you assume that experience is a source of value, then mapping out the set of possible experiences, their computational properties, and above all how good or bad they feel, seems like an ethical and theoretical priority in order to make progress on systematically figuring out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ refers to how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism; the idea that phenomenology can be precisely represented mathematically. We borrow that goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists. This refers to our belief that emotional valence, or pain and pleasure, the goodness or badness of an experience, is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack though the skeptics position of your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for how to organize knowledge about, in this particular case, how you actually make sense of the world visually. The framework goes as follows: there are three ways in which you can describe an information processing system. First of all, the computational/behavioral level. What that is about is understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. Here an analogy would be with an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, and divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. What the algorithmic level of analysis is about is figuring out what are the internal representations and possible manipulations of those representations such that you get the input-output mapping described by the first layer. Here you have an interesting relationship where understanding the first layer doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be something like: whenever you want to add one, you just push a bead. Whenever you’re done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s may have associated consciousness, if they gave any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement learning over many iterations.
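As a toy sketch of that underdetermination (hypothetical code, not anything from the interview): two procedures can share the same computational-level description, the same input-output mapping, while differing completely at the algorithmic level.

```python
# Two implementations of the same computational-level spec:
# add(a, b) -> a + b, for non-negative integers.

def add_builtin(a: int, b: int) -> int:
    # Algorithm 1: rely on native integer addition.
    return a + b

def add_abacus(a: int, b: int) -> int:
    # Algorithm 2: "push a bead" one at a time, abacus-style.
    total = a
    for _ in range(b):
        total += 1
    return total

# Same input-output mapping at the computational level...
assert all(add_builtin(a, b) == add_abacus(a, b)
           for a in range(10) for b in range(10))
# ...but different internal representations and run-time complexity:
# O(1) vs O(b). The computational level underdetermines the algorithm,
# and neither says anything yet about the implementation level
# (silicon, beads on rods, buckets of water, neurons).
```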

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and generally will be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis. How the system is physically constructed has bearing on what it is like to be that system.

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition can be, or whether, if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense, and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which ends up trying to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There’s like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power: the fact that we are conscious matters for evolution. Evolution made us conscious for a reason; it’s actually doing some computational legwork that would maybe be possible to do otherwise, but not as efficiently or as conveniently as it is with consciousness. Then also you have the property of qualia, the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. All of these are in completely different worlds, and in a sense they are, but they have the property that they can be part of a unified experience that can experience color at the same time as experiencing sound. Those different types of sensations we describe under the category of consciousness because they can be experienced together.

And finally, you have unity, the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge and take seriously its unity.

Lucas: What are your guys’s intuition pumps for thinking why consciousness exists as a thing? Why is there a qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that there are various contrasts that you can have within experience, which can serve a computational role. So, there may be a very deep reason why color qualia, or visual qualia, is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, people who are synesthetic.

They may open their eyes and they experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia that we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to represent geometrically a projective space. That’s something that naturally comes out of representing the world with the sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans that whenever they opened their eyes, they experience sound and they use that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. That’s the intuition behind why we’re conscious: all of these different contrasts in the structure of the relationships between possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input-output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: that whenever you have a particular type of algorithmic information processing, you get this geometric state-space. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.
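A minimal sketch of the “geometric state-space” idea for the color case (the coordinates below are hypothetical placeholders, not a real perceptual color model such as CIELAB): once qualia are treated as points in a Euclidean space, similarity between experiences becomes ordinary distance.

```python
import math

def qualia_distance(p, q):
    # Euclidean distance between two points in a hypothetical
    # three-dimensional color state-space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical coordinates for three hues:
red    = (1.0, 0.0, 0.0)
orange = (1.0, 0.5, 0.0)
blue   = (0.0, 0.0, 1.0)

# In a geometric state-space, perceptually nearby hues sit closer
# together than distant ones:
assert qualia_distance(red, orange) < qualia_distance(red, blue)
```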

The general approach to how to tackle them is that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is, how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism and one of the examples that I brought up was, if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is objectively running. So this is a kind of an unanswerable question from the perspective of functionalism, whereas with the physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is, let’s say you’re at a park enjoying an ice cream. Imagine a system that has, let’s say, algorithms isomorphic to whatever is going on in your brain. The particular algorithms your brain is running in that precise moment, within a functionalist paradigm, map onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what is the boundary of the system? Even for some of these particular states that allegedly are very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That calls into question whether there is an objective, non-arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem for those who haven’t heard of it is basically, how is it possible that 100 billion neurons just because they’re skull-bound, spatially distributed, how is it possible that they simultaneously contribute to a unified experience as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems like what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience like experience seems unified, but experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things. We look at a person and we might have an intuition of what type of person that person is, and if we’re not careful, we can confuse our intuitions, we can confuse our feelings, with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There are definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can simultaneously place your attention on two spots of your visual field and make them harmonize. That’s the phenomenal character, and I would say that there’s a strong case to be made to not doubt that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gone into.” It also seems to violate Occam’s razor, or a principle of lightness where one’s metaphysics or ontology would want to assume the least amount of extra properties or entities in order to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, I mean there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones, or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so, it is a viable position today to say that about consciousness, that it’s not yet clear whether consciousness has deep structure, but we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it the specific theories and assumptions, which you take to be falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is, it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on what we talk about, how we understand the world, how we behave?

Since the implementational level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation, all it can maybe figure out its own algorithm, and it’s identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, there is bearings on one level of analysis onto another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets and in that case, okay, the implementation level of analysis goes out the window in terms of it doesn’t really help you understand the algorithm.

But if how you’re using water to implement algorithms is by basically creating this system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things? What if epiphenomenalism is true, and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe: that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of understanding consciousness, in contrast to functionalism. David Pearce, I think, would describe it as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, non-materialist is not saying that the stuff of the world is fundamentally unconscious. That’s something that materialism claims: that what the world is made of is not conscious, is raw matter, so to speak.

Physicalist, again, in the sense that the laws of physics exhaustively describe behavior, and idealist in the sense that what makes up the world is qualia, or consciousness. The big picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101, and you just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalists’ take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence and how it fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want and what it is that we actually like and enjoy. That question is usually answered in an agentive way. So basically you think of agents as entities who spin out possibilities for what actions to take, and then they have a way of sorting them by expected utility and then carrying them out. A lot of people may associate what we want or what we like or what we care about with that level, the agent level, whereas we think actually the true source of value is more low level than that. That there’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions and evaluating them and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is actually what is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience. That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes what implements valence gradients. Perhaps an intuition pump here is this extraordinary fact that things that have nothing to do with our evolutionary past nonetheless can feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh you may feel happy.

That makes sense from an evolutionary point of view, but why would the sound of the Bay Area Rapid Transit, BART, which creates these very intense screeching sounds that are not even within the vocal range of humans, just really bizarre, never encountered before in our evolutionary past, nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns. It’s not just goals and actions and utility functions; the actual pattern of your experience may determine valence. The same goes for the SubPac, which is this technology that basically renders sounds between 10 and 100 hertz, and some of them feel really good, some of them feel pretty unnerving, some of them are anxiety producing, and it’s like, why would that be the case? Especially when you’re getting types of input that have nothing to do with our evolutionary past.

It seems that there are ways of triggering high and low valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness like meditation or psychedelics that seem to come with extraordinarily intense and novel forms of experiencing significance or a sense of bliss or pain. And again, they don’t seem to have much semantic content per se, or rather the semantic content is not the core reason why they feel the way they do. It has to do more with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want, but all of these have counterexamples. All of these have some points that you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or will sort of intrinsically encode, your emotional valence, how pleasant or unpleasant this experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is that there is this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects, like they’re just good or bad. So with this valence realism view, there is this potentiality, this goodness or badness whose nature is sort of self-intimatingly disclosed, in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life, and embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in experiential relation with the thing, its nature is disclosed. Brian Tomasik, and I think functionalists generally, would say there’s just another reinforcement learning algorithm somewhere before that is just evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by basically realizing that liking, wanting and learning are possible to dissociate, and in particular you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something we believe matters because you are the type of agent that has a utility function and a reinforcement function. If that was the case, we would expect valence to melt away in states that are non-agentive; we wouldn’t necessarily see it. And also we would expect it to be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample is that somebody may claim that what they truly want is to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory, in that if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this would be, let’s say on the person’s graduation day, you give them an opioid antagonist. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, that they thought all along was tied in with intentional content, with the fact of graduating, but in fact it was the hedonic gloss that they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems then, in aggregate, you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math and the interpretation. The first question is what metaphysics do you even start with? With what ontology do you even try to approach the problem? And we’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there’s this question of, okay, so you have your core ontology, in this case physics, and then there’s this question of what counts, what actively contributes to consciousness? Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses but we don’t have an answer. Moving into the math, conscious systems seem to have boundaries; if something’s happening inside my head it can directly contribute to my conscious experience. But even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine, there seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem, and these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness through this lens, with a certain style of answer, a certain style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

If we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness, then we can figure out some principled way to draw a boundary and say, okay, this is conscious experience A and this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary. Then there’s this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the question of, if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense, but we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven and the reality is a five; maybe that’s a problem, but at the same time, I like chocolate, and if you give me some chocolate and I eat it and that improves my subjective experience, then I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah, maybe an analogy here could be pretty useful. There’s this researcher William Sethares who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it’s going to be pretty low on average. And if you take the average dissonance of a minor key, it’s going to be higher. So in a sense what distinguishes a minor and a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.
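The procedure Mike describes can be sketched in a few lines of Python. This is only an illustration: the `pair_dissonance` curve below uses a Plomp-Levelt style roughness model with Sethares’ commonly cited parameter fit, and the 1/k amplitude roll-off and the particular note frequencies are assumptions for the demo, not anything specified in the conversation.

```python
import math

def pair_dissonance(f1, a1, f2, a2):
    """Sensory dissonance of two pure tones (Plomp-Levelt curve,
    Sethares' parameterization)."""
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)   # critical-band scaling
    df = abs(f2 - f1)
    return min(a1, a2) * (math.exp(-3.5 * s * df) - math.exp(-5.75 * s * df))

def total_dissonance(fundamentals, n_harmonics=6):
    """Add up the pairwise dissonance between every harmonic of every note."""
    partials = [(f * k, 1.0 / k)              # amplitudes roll off as 1/k
                for f in fundamentals
                for k in range(1, n_harmonics + 1)]
    return sum(pair_dissonance(f1, a1, f2, a2)
               for i, (f1, a1) in enumerate(partials)
               for f2, a2 in partials[i + 1:])

fifth   = total_dissonance([261.6, 392.4])  # C4 + G4, a 3:2 ratio
tritone = total_dissonance([261.6, 369.9])  # C4 + F#4
print(tritone > fifth)  # True
```

Running it shows, for instance, that a tritone scores a higher total dissonance than a perfect fifth: the fifth’s harmonics largely coincide, while the tritone’s nearly-but-not-quite coinciding partials beat against each other.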

That’s a very ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition, and how do you even go about it? But deep down, the reason why a particular phrase sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very similar to music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.

These are electromagnetic waves, and it’s not exactly static and it’s not exactly a standing wave either, but it gets really close to it. So basically what this is saying is there’s this excitation-inhibition wave, and that happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which that wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is basically a very simple one where the two hemispheres are alternatingly more excited versus inhibited. That will be a low-frequency harmonic because it is a very spatially large wave, an alternating pattern of excitation. Much higher frequency harmonics are much more detailed and obviously harder to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited are these very thin wave fronts.

It’s not a mechanical wave as such, it’s an electromagnetic wave. So what actually fluctuates is the electric potential in each of these regions of the brain, and within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
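That last idea, a brain state as a weighted sum of harmonics, can be sketched with a toy model. Here the “harmonics” are sinusoidal standing waves on a 1-D line (in Atasoy’s actual model they are eigenmodes of the connectome graph; sinusoids are just the uniform 1-D special case), and the weights are made up for the demo. Because the modes are orthogonal, each weight can be read back off by projecting the state onto that mode, which is the sense in which the decomposition acts like a Fourier transform:

```python
import math

N = 200
xs = [i / (N - 1) for i in range(N)]

# Toy 1-D "cortex": harmonics are sinusoidal standing waves, modes k = 1..5.
def harmonic(k):
    return [math.sin(math.pi * k * x) for x in xs]

modes = [harmonic(k) for k in range(1, 6)]
weights = [0.8, 0.1, 0.4, 0.0, 0.2]  # a hypothetical "state of consciousness"

# A brain state is the weighted sum of its harmonics at every point:
state = [sum(w * m[i] for w, m in zip(weights, modes)) for i in range(N)]

# Orthogonality lets us recover each weight by projection onto its mode:
recovered = [2 * sum(state[i] * m[i] for i in range(N)) / N for m in modes]
print([round(r, 2) for r in recovered])  # ≈ [0.8, 0.1, 0.4, 0.0, 0.2]
```

The same projection step, applied to real data, is what lets one ask how much energy a given state of consciousness puts into each harmonic.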

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is like a fundamental valence thing of the world, and all brains that come out of evolution should probably enjoy resonance?

Mike: It’s less about the stimulus, it’s less about the exact signal, and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or, in precise technical terms, the consonance that counts, is the stuff that happens inside our brains. Empirically speaking, most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than, for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going into our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combine it with our Symmetry Theory of Valence. This is sort of a way of basically getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically we can evaluate this data set for harmony. How much harmony is there in a brain? With the link to the Symmetry Theory of Valence, it should then be a very good proxy for how pleasant it is to be that brain.
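A minimal sketch of what “evaluating a data set for harmony” could look like, under the Symmetry Theory of Valence’s assumption that consonance tracks valence. The two energy distributions and the Plomp-Levelt style roughness curve below are illustrative assumptions, not the exact procedure QRI uses:

```python
import math

def pair_dissonance(f1, a1, f2, a2):
    # Plomp-Levelt style roughness of two pure tones (Sethares' parameter fit).
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    df = abs(f2 - f1)
    return min(a1, a2) * (math.exp(-3.5 * s * df) - math.exp(-5.75 * s * df))

def spectral_dissonance(spectrum):
    """spectrum: list of (frequency_hz, energy) pairs, e.g. the energy a
    harmonic decomposition assigns to each brain harmonic."""
    return sum(pair_dissonance(f1, a1, f2, a2)
               for i, (f1, a1) in enumerate(spectrum)
               for f2, a2 in spectrum[i + 1:])

# Two hypothetical energy distributions over brain-harmonic frequencies:
harmonic_state  = [(100.0, 1.0), (200.0, 0.5), (300.0, 0.33), (400.0, 0.25)]
clustered_state = [(100.0, 1.0), (104.0, 0.5), (109.0, 0.33), (113.0, 0.25)]

# Near-miss frequencies beat against each other, so the clustered state
# scores as more dissonant, i.e. (on this theory) lower valence:
print(spectral_dissonance(harmonic_state) < spectral_dissonance(clustered_state))  # True
```

On this kind of measure, a state whose energy sits at integer-related frequencies scores as more consonant than one whose energy is smeared across nearby, unrelated frequencies.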

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There’s probably many ways of generating states of consciousness that are in a sense completely unnatural that are not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness would cash out in differences in brain harmonics because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about this connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump in here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on these. The first is what you call the real problem of consciousness, and the second one is what David Chalmers calls the Meta problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of whether it is something that can be formalized or is intrinsically fuzzy. I’m calling this the real problem of consciousness, and a lot depends on the answer to it. There are so many different ways to approach consciousness and hundreds, perhaps thousands, of different carvings of the problem: we have panpsychism, we have dualism, we have non-materialist physicalism, and so on. I think essentially, as the core distinction, all of these theories sort themselves into two buckets, and that is: is consciousness real enough to formalize exactly, or not? This frame is perhaps the most useful frame to use to evaluate theories of consciousness.

Lucas: And then there’s a Meta problem of consciousness which is quite funny, it’s basically like why have we been talking about consciousness for the past hour and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness, why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in the field. The Meta problem of consciousness doesn’t necessarily solve anything but it tries to inclusively start the conversation.

Andrés: The common move that people make here is to say that all of these crazy things we think and talk about consciousness are just what any information processing system modeling its own attentional dynamics would produce. That’s one illusionist frame, but even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, such that you need to have consciousness and so on, but still lacking introspective access. You could have these complicated conscious information processing systems, but they don’t necessarily self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting basically when you are at existential risk or when there are reproductive opportunities that you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience added together, in such a way that you experience it all at once.

I think like a lot of the puzzlement has to do with that internal self model of the overall well being of the experience, which is something that we are evolutionarily incentivized to actually summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious and they assume everyone else is conscious and they think that the only place for value to reside is within consciousness, and that a world without consciousness is actually a world without any meaning or value. Even if we think that say philosophical zombies or people who are functionally identical to us but with no qualia or phenomenological states or experiential states, even if we think that those are conceivable, then it would seem that there would be no value in a world of p-zombies. So I guess my question is why does phenomenology matter? Why does the phenomenological modality of pain and pleasure or valence have some sort of special ethical or experiential status unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence, that we should be wary of building a Disneyland with no children: some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it, basically. I would just say that I think most AI safety research is focused around making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem, making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say that there aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as for your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re allowed to basically probe the quality of your experience. In many states you believe that the reason why you like something is its intentional content. Again, it could be the case of graduating, or the case of getting a promotion, or one of those things that a lot of people associate with feeling great, but if you actually probe the quality of experience, you will realize that there is this component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opioid antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, when it comes to many different points of view, agreeing on what aspect of the experience is what brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is and not what ought. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions in ethics and some frameworks in ethics make much more or less sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality. I want to talk about what exists, what’s real and what the structure of what exists is, and I think if we succeed at that then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit should wrapped up in questions about valence, but I do think that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame to valence. Whether or not this also discloses, as David Pearce says, the utility function of the universe, that is another question and can be decomposed.

Andrés: One framing here too is that we do suspect valence is going to be the thing that matters to any mind, if you probe it in the right way, in order to achieve reflective equilibrium. A great example is a talk a neuroscientist was giving at some point: there was something off and everybody seemed to be a little bit anxious or irritated and nobody knew why, and then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There was this very subtle hissing pattern caused by some malfunction of the microphone and it was making everybody irritated; they just didn’t realize that was the source of the irritation, and when it got fixed then, you know, everybody’s like, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. So somebody in the year 2050 might come into one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me,” but if you put them through the scanner, we identify that their 17th and 19th harmonics are in a state of dissonance. We cancel the 17th to make it cleaner, and then the person will say all of a sudden, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us about why we prefer this, why we think that is worse, will all of a sudden become crystal clear from the point of view of valence gradients, objectively measured.

Mike: One of my favorite phrases in this context is “what you can measure you can manage,” and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door for, honestly, maybe a lot of amazing things, making the human condition just intrinsically better. Also maybe a lot of worrying things; being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems and they’re getting pretty strong, and they’re going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride for now. So I’d like to discuss a little bit here the more specific places in AI alignment where these views might inform it and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all, especially to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even having the best technical solution in the world, a sufficiently good technical solution, doesn’t mean that it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem. What are we even trying to do here? What is the optimal relationship between AI and humanity? And also a couple of specific details here. First of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing X-risk.

Lucas: What kind of nihilism are you talking about here, like nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide. Whereas how I think of it is, the real philosophical question is how to make love last, bringing value to existence, and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People talk about how having global coordination in building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may sort of embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So moving forward with AI alignment as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness here, actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what’s up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb here for how to do that. One interesting possibility could be that in the future we might not just have compilers which optimize for speed of processing or minimization of dependency libraries and so on, but which could optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment, from present day until superintelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s sort of understanding, yeah, human preferences are insane. They’re just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There’s this frame in AI safety called the complexity of value thesis. I believe Eliezer came up with it in a post on LessWrong. It’s this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn’t think of as very valuable. Maybe we leave everything the same but take away freedom, and most of the value is gone. This paints a pretty sobering picture of how difficult AI alignment will be.

I think this is perhaps arguably the source of a lot of worry in the community: not only do we need to make machines that won’t just immediately kill us, but ones that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly if we move at all, then we may enter a totally different trajectory, one that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing that I’m playing around with here is: instead of the complexity of value thesis, the unity of value thesis. It could be that many of the things that we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all of these have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there’s some sort of elegant compression that can be made, and actually things aren’t so starkly fragile. We’re not this point in a super high-dimensional space where, if we leave the point, everything of value is trashed forever; maybe there’s some sort of convergent process that we can follow, that we can essentialize. We can make this list of 100 things that humanity values, and maybe they all have in common positive valence, and positive valence can be reverse engineered. To some people this feels like a very scary dystopic scenario – don’t knock it until you’ve tried it – but at the same time there’s a lot of complexity here.

One core frame that the idea of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different than the complexity of value thesis puts forward. Maybe the actual goal is different and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all the preferences and values that human beings have, and the valence realist view, which says that what’s ultimately good are certain experiential or hedonic states. I’m interested and curious about whether, if this valence view is true, it’s all just going to turn into hedonium in the end.

Mike: I’m personally a fan of continuity. I think that if we do things right we’ll have plenty of time to get things right, and also if we do things wrong then we’ll have plenty of time for things to be wrong. So I’m personally not a fan of big unilateral moves. It just gets back to this question of whether understanding what is can help us; clearly yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it’s a seductive frame, I would say. But whole brain emulations wouldn’t necessarily have the same qualia as the original humans, based on hardware considerations, and there could be some weird lock-in effects where, if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It’s a pretty good taxonomy. Basically there’s open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we’re all one consciousness. Another framing is that our true identity is the light of consciousness, so to speak, so it doesn’t matter in what form it manifests; it’s always the same fundamental ground of being. Then you have the common sense view, called closed individualism: you start existing when you’re born, you stop existing when you die. You’re just this segment. Some religions actually extend that into the future or past with reincarnation or maybe with heaven.

It’s the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have this view called empty individualism, which is that you’re just a moment of experience. That’s fairly common among physicists and a lot of people who’ve tried to formalize consciousness; often they converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, all of those seem to be implicitly based on closed individualism, and they’re not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn’t actually carve nature at its joints, as a Buddhist might say, and the feeling of continuity of being a separate unique entity is an illusory construction of your phenomenology, that casts how to approach rationality itself, and even self-interest, in a completely different light, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because… insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into these very tricky situations like, well, what if there is mind melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common sense view of identity, but they’re not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there’s probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we’re actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it’s evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to believe of yourself as that which you can affect the most directly in a causal way, if you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have. If you think of a society of open individualists, where everybody is altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that the latter view would have a tremendous evolutionary advantage in that context. So I’m not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it: how to make it evolutionarily stable and also how to make it ethical. It’s an open question. I do think it’s important to think about, and if you take consciousness very seriously, especially within physicalism, that usually casts huge doubts on the common sense view of identity.

It doesn’t seem like a very plausible view if you actually tried to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we’re not all empty individualists or open individualists by default. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self, because they’re not the same as you? That leads to a problem of defection. And open individualism, where everything is the same being, so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn’t allow altruistic punishment or any way to stop the free riding. There’s interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one’s theory of identity. People are opening themselves up to getting hacked in different ways, and different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that’s really good and sometimes really bad. I would make the prediction that, not necessarily open individualism in its full-fledged form, but a weaker sense of identity than closed individualism, is likely going to be highly adaptive in the future as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells and not try to disturb the local attractor too much. That itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the ones making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity I think is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, in identity, I’d like to move beyond all distinctions of sameness or difference. To say, oh, we’re all one consciousness, to me seems like saying we’re all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world, just a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

The same way in which it would be nonsense to say, “Oh, I am these specific atoms, I am just the forces of nature that are bounded within my skin and body.” That would be nonsense. In the same sense, in what we were discussing with consciousness, there is the binding problem of the person, the discreteness of the person. Where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional use, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of the valence theory, if qualia are actually something good or bad, then as David Pearce says, it’s really just an epistemological problem that you don’t have access to other brain states in order to see the self-intimating nature of what it’s like to be that thing in that moment.

There’s a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it’s good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn’t matter where the bliss is. Bliss is bliss, and there’s no such thing as your bliss or anyone else’s bliss. Bliss is its own independent feature or property, and you don’t really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that’s playing out.

The universe is just peopling all of us at the same time, and when you get this view and you see yourself as just a super thin slice of the evolution of consciousness and life, for me it’s like, why do I really need to propagate my information into the future? I really don’t think there’s anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but for people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see that as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don’t have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that’s not necessarily super easy if you’re suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There’s an article I wrote called “Consciousness vs. Replicators.” That kind of gets to the heart of this issue. It sounds a little bit like good and evil, but it really isn’t. The true enemy here is replication for replication’s sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and the benefit of consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark’s general frame of us living in this mathematical universe. One reframe of what we were just talking about, in these terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and optimize our light cone for those sorts of patterns. This may have some counterintuitive implications; maybe closed individualism is actually a very adaptive thing that, in the long term, builds robust societies. It could be that that’s not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: A lot of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. It’s this idea that we tend to think of consciousness on the human scale. Is the human condition good or bad? Is the balance of human experience on the good, heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated, but it is falsifiable, and this gets into Bostrom’s simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, then first of all, the human stuff might just be a rounding error.

Most of the value, in this sense the positive and negative valence, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding in the cosmological sense. I’m very suspicious that the Big Bang starts with a very symmetrical state; I’ll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not humans; cosmological-scale events or objects would be very interesting to point it at. This will give a much clearer answer as to whether we live somewhere closer to heaven or hell than human intuition can.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, I think wonderful questions, and it’s very rare for an interviewer to have unconventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org and we’re working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We’re building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés’ at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey

All-Is-One Simulation Theory

Allen Saakyan asks in All-Is-One Simulation Theory “what is this simulation?”. Here are two interesting responses (lightly edited for clarity):

Rak Razam: It’s an interactive learning program. It sounds hippy-dippy when you say “your intention and your belief [becomes real if you ask for it]”, but if you really try this and focus your will, and you put out your intention… it does work! You know? It is not just a “manifest the right card” kind of thing. It is, rather, about how we have different capabilities within our wetware, while most of Western culture is focused on the egoic navigation of survival pathways and hierarchical climbing. We have these almost magical capabilities of intuition, which is not the intellectual ego. It’s listening to that broadcast signal for how to connect to the larger web of information that is always being broadcast. We have the imagination, which in old magical understanding is sort of your ability to carve out the probability pathways. We are connected to the universal intelligence which has manifested life. And it is listening to us because it IS us. Right? It’s just as that single cell organism, as soon as it replicates from outside of space-time, as that singularity that many different world religions believe is the G.O.D., or the source, or some may call it Samadhi, or whatever you call it. There is this idea that there is this originating source, which in quantum physics I call the implicate. And the question is “why?” Why would we have simulations at all? In many of these religious cultures or spiritual understandings… in the beginning was the word, and the word was with God, and the word was God, but the word is a vibration, and a vibration in my understanding of the shamanic realms is unconditional love. It’s the highest vibratory expression of divine being. That vibration radiates out and then condenses down into what we call the explicate, or, this simulated reality. But it is actually not separate. It is like birthing itself into this creation. Why?
I believe to create more love, because it needs a vessel within space-time to set its roots down to make more of itself, because it’s all there is. And some people say God is lonely, or whatever, we project these human conceits on source consciousness. I don’t think it’s lacking; I think it is so abundant that the infinite vibration of itself, which is everything, is such that everything can’t be more than everything so it needs to come down in space-time to create vessels to replicate itself and then we have division and we have all this stuff. So the simulation is like the wrapper with that creamy center. And it’s all about love.


Teafaerie: I think of it as being a work of art. It is a playful thing. It is self-generating and making beautiful forms for its own appreciation. Anything you say, “it’s like this”: it is not that. It’s a thing that occurs in here; that’s a metaphor. Is it a school? A test? A trap? A prison? It is none of these things. The beauty of this piece is that you can hold it this way and it is an epic adventure story, and you hold it this way and it is a tragic farce, and if you hold it this way it is a romantic comedy. Most people think it’s a school. “Why did this happen to me? So I would learn something, right?”. I DO learn stuff, but I prefer to think of it as a massively multiplayer game, and a collective work of art. So I am here to play my character and to participate in the collective work of art. And that gives you an orientation for why you are here. And with this overarching narrative you can take that lens out and put another lens in, because it is God’s own truth. We don’t know anything about the simulators. They could be planet-sized quantum computers working together. We don’t know what they want us not to do with our willies. It’s just… you can hold it any way you’d like to. I just think “massively multiplayer game” is a good metaphor because it’s fertile, and flexible, and aesthetically satisfying, and creates for it action, and it’s fun and it is not scary. We wouldn’t be here if this didn’t get 4 stars on universes.com, in this model. We are freaking the customer in this model. This is to amuse such as ourselves. Made by billions, played by trillions.


Analysis

The conversations in this video are the state of the art in thinking about DMT-like states of consciousness from a phenomenological and theoretical point of view. Everyone on this panel has used ayahuasca and/or vaped DMT dozens if not hundreds of times, and has guided trips for dozens of people. They also know an extended network of others with a wide range of experience with the state, and have been exposed to all of the major memes in the transpersonal space and mystical traditions. So what they say is likely representative of the frameworks used at the top of the “knowledge hierarchy” when it comes to genuine acquaintance with the phenomenal universe of DMT.

Now, I am an indirect realist about perception myself, and I think that we are in basement reality (sadly). The suffering of this world is enormous and brutal, and the stakes of thinking we are in a loving simulation are very high and real. Every day we don’t work towards eliminating suffering is a day millions of agonizing hours could have been prevented. Hence my resistance to beliefs like “we are in a simulation that is meant to help us learn and grow!” Hopefully! But let’s make sure we don’t screw up in case that’s not true.

That said, I do think DMT-like consciousness is of profound significance and may play a key role in eliminating suffering, on two fronts:

1) These states of consciousness are remarkable for creating extremely compelling renderings of one’s metaphysics, which can lead to “leveling up” one’s models of the world. And,

2) I also do think the “vibration of love” is a thing, in terms of quality of a state of consciousness, which is present in plentiful amounts on good DMT experiences.

My core question in this space now would be: How do we hone in on the beautiful consonant high-energy metta (loving kindness) qualia disclosed by these states, and study it scientifically for the benefit of all sentient beings?

Pure Land Youtube Hits

Leaked “Greatest Video Hits” from the video-sharing equivalent to Youtube in the servers of the Pure Land of Amitābha Buddha (they were using AWS with bad security):

  1. “60 Seconds to Enlightenment: How to create a thought-form that achieves the 3 stages of emptiness in 1 minute or less.” (a little documentary from vimeo star “Buddhamax”, record-holder for taking thought-forms from a deluded state all the way to a transcendent state as fast as possible)
  2. “We Are The Mandala, We Are The Form” (a call to Boddhisatvas from all corners of the Mandala to donate some good karma to the relief efforts after a large comet struck a planet in the equivalent of the Jurassic period for that evolutionary timeline)
  3. “Dance Dance Blast Blast” (a short thriller/comedy about Mandila, a soul-based Deva world 2 levels below the Pure Lands, where a little soul finds out it has a strange innate talent for making music-taste thought-forms and uses them to decode the structure of its world and hack its way out of it, Matrix-style)

Transcending and Including Integral Theory

I have one major rule: everybody is right. More specifically, everybody — including me — has some important pieces of the truth, and all of those pieces need to be honored, cherished, and included in a more gracious, spacious, and compassionate embrace.

Introduction, Collected Works of Ken Wilber, vol. VIII (2000)

Ken Wilber recently commented on Jordan Peterson for 1 hour and 20 minutes in this interview. You can probably gain about 80% of the value in the video by watching the first 20 minutes. Using ribbonfarm‘s signature concept handle “refactoring perception”*, we could say that Ken Wilber refactors Jordan Peterson in Integral Theoretic frameworks. His affinity for Peterson is the result of interpreting his actions as those of someone who sees the world through Integral lenses, in the sense that he acknowledges the partial truths of each level up to and including Teal.

Is the Integral Theory meme-plex capable of absorbing Jordan Peterson’s sphere of influence? Probably not, because as Ken Wilber might put it, Peterson followers are a mixture of Teal (Integral), Orange (Modern), and Amber (Traditional) people all pulling together against the memetic totalitarianism of the Green (Post-Modern) developmental stage. That said, we could perhaps anticipate a degree of memetic revival of Integral Theory thanks to its compatibility with Petersonism.

Altitudes-of-Development

What do I personally think of the Integral meme-plex? I see it as an upgrade relative to current mainstream worldviews. Alas, in my experience interacting with people who really dig that worldview (of which there are plenty in the Bay Area consciousness development/hacking space), I’ve encountered strong resistance against some of the core values and perspectives that QRI‘s Qualia/Valence meme-plex brings:

Integral Theorists tend to be dismissive of concerns about wild animal suffering and the genetic roots of suffering, and of the possibility of identifying the physico-mathematical signature of bliss, which they might write off as a Modernist fantasy(!).

To upgrade the Integral Theory meme-plex, I’d emphasize the following:

  1. The Tyranny of the Intentional Object (which has material bearings on how we interpret the nature of “mystical” states, e.g. Jhanas are not so much ‘spiritual’ as glorified high-valence states with long-term mental health benefits thanks to neural annealing).
  2. That Open Individualism is consistent with (and indeed even implied by) monistic physicalism.
  3. And that the goals of transhumanism (superhappiness, superintelligence, and superlongevity) are indeed a direct implication of systematizing ethics (rather than being driven by egoic structures, as swiftly assumed by most).

*As of March of 2019 they seem to have moved on to “constructions in magical thinking.”

The Resonance and Vibration of [Phenomenal] Objects

Extract from “Many Voices, One Mission” by Michael G. Reccia, R. Jane Kneen

25th February, 2007

[Jane recalls: This clairvoyant address from Silver Star was given to Michael and myself one Sunday morning on our return home after he had accompanied us on a walk around the local reservoir.]

Silver Star: I want to talk about resonance and vibration, particularly in the spirit world. What I am going to talk about also applies to this level but cannot be picked up with earthly senses.

Everything in the spirit world has a sound, a colour, a tonality of Light and a degree of sentience. If I place a box in front of you on an earthly level it is simply a box …a utilitarian, functional, square structure built to hold something. If I gave you a box from the spirit worlds, that box has been created by thought. So, if I have a box in my house in the spirit world and turn my attention to it, that box will have a colour value. It is not just a box but is also a vibration of Light, and it is that vibration of Light that gives it its ‘solidity’. It will have a particular colour according to the purpose I assign to it. And, because the spiritual atoms in the box are vibrating at a certain rate, that box also has a sound value if I choose to tune in to it, which will be pleasing to the ear because the box has been created in one of the worlds of Light. That box is made from energy so as well as its colour – which will change dependent on what I want the box to do – it also has a luminance.

So, every object in the spirit realm has a colour (even a seemingly transparent box has a colour value) …it has a vibration of sound …and it has a perfume. The box stimulates all the senses we had on Earth so we can see, hear, touch and smell it.

Michael: Why would it have a perfume, Silver Star?

Silver Star: All the senses can be used as a spirit. Even though there is no dense atmosphere like on Earth, the sense of smell is stimulated by any object. There are exquisite perfumes here and each object has a unique scent dependent upon its complexity and intention of purpose. There is also a background aroma from the landscape and a subtle fragrance with people as well.

All your senses work at once, spiritually speaking, which is why, when Michael tries to describe spirit communication, he says it is like a beam of energy that carries within it a huge amount of information. Each object here conveys a lot of information …it has Light …it has a colour value …it has a sound value …and it has a perfume.

You can tune out these things, dependent on which of them you want to sense, or you can experience all of them at once. You can (as in your drug ‘culture’ on Earth) smell colours. You can experience emotion through colours and so feel warm or cold or happy or sad through colours. You can hear the symphony of Creation that runs through everything, and yet it can be individualised so that you only hear it coming out of the ‘box’ or just from the atmosphere around you …or you can hear it coming from the whole of the sphere you are in at the time.

Discord also exists in the Lower Astral because people there are still God-like in potential and what they imagine takes form. They think up discordant images and those images have vibrations that clash …that have a disagreeable odour …that exhibit violent colours …that carry sounds that are an assault on the senses. The spirits in the Lower Astral can feel and sense their havoc that, whilst on Earth, they thought they were creating in secret.

As an example of what I am trying to put across, let us take the simple act of one person visiting a friend with the express intent of spreading discord by voicing disharmonious views about a mutual acquaintance. On an earthly level all the person can be seen to have done is travel from point A to point B, indulge in malicious gossip, and then return home with no apparent retribution or consequence for having done that.

On a spiritual level, however, as soon as that person decides to undertake that malevolent course of action, they create jagged thoughts around themselves and attract towards themselves similar spirits from the Lower Astral. So, accompanying them unseen on that journey are spirits who think that this is a fun thing to do, because their vibrations are similar to the intentions of the soul on Earth wishing to cause trouble.

In contrast, the person who is sitting in meditation creates thoughts of Light and balance, and creates spheres (like the ones photographed over your door [reference to the coloured orbs that I had unexpectedly captured on my camera]), which have lovely, balanced colours and beautiful harmonies and sounds. Whereas the person who is acting negatively towards someone else creates a ‘thunderstorm’ around themselves with roiling clouds of darkness and disharmony within that bubble. Those disharmonies are then recorded in the karmic pattern of the person’s life, inhibiting their vibration so that, when they pass to spirit, they have to rid themselves of that thought and intention (even though they might not have thought about it for years) before they can raise their vibrations enough to fully appreciate the worlds of Light.

So, just as there are harmonious colours in Creation, there are also disharmonious colours; and just as there are harmonious sounds there are sounds – like clashing cymbals – of discord and disharmony.

Many illnesses on Earth are caused by the body having to react on a subconscious level to the constant battering of disharmony that the spirit has created around itself, year in, year out. The cells within a person’s body also vibrate, have a sound- and Light-quality and are designed to be harmonious but can be overcome by the bombardment of dominant negative thoughts from the person. That dulls or extinguishes the Light within the cells, resulting in the cells becoming unhealthy because they are not receiving the harmony from God that should reach them …so thick are the thoughts around the person and their subconscious.

In spiritual work you will often detect a smell …such as cigarettes, earth or perfume. The part of the wavelength that affects smell is easier to manipulate from one of the spirit worlds than sight or hearing, so very often a discarnate soul will convey a smell to the person on the Earth plane they wish to contact because it is the easiest way (relatively speaking) to do so from the spirit worlds.

So, if we come back to my box – a simple object like that is really a riot of colour, sound and intention but so is everything on Earth …as above, so below. The chair you are sitting on has its own harmony, its own energy-signature and a colour that is quite apart from the colour vibration you can see on the Earth plane. That vibration or low level of sound is harmonising with the other sounds in your room – such as the sounds that the chairs and the television are giving out; and electricity gives out a very specific sound-signature spiritually. There are colours that are emanating in your room on a spiritual level and perfume vibrations given out by various objects. Flowers, for example, give off perfume on a low level on the Earth but their perfume as a glorification of God on a spiritual level is wonderful to experience.

All this is going on in everyone’s house and this is why a medium can enter someone’s room and instantly feel threatened because of the chaos picked up there, or in somebody else’s room – like this one – feel at peace because peace is in the atmosphere. On a subconscious level you, as souls, pick up the Light-signature, the colour-signature, the sound and the perfume from the objects within a room… and, indeed, from people themselves.

It is not a cacophony in our world because everything is harmonious and we can tune in or tune out those particular vibrations, just as you would on a television. If we do not wish to hear the background noise of the universe we tune it out – knowing that it is always there if we wish to tune back in to it. If we don’t wish to see beyond the physical objects that we have (like the box) and don’t want to see its colour or Light, we tune out of that particular wavelength so we are just looking at the object.

The reason I am telling you this is to make you aware of the fact that the universe is vibrant and alive. There is no such thing as a ‘dead object’ because the last quality my box exhibits is a degree of sentience. It is built out of God-ether …the substance of the universe… and, therefore, is alive. I can influence the box with my moods and so it is on Earth. You influence your immediate surroundings with your moods and, because those surroundings are, in effect, alive, they react to the way that you are feeling.

Objects on Earth should be blessed periodically and dedicated back to God so that their vibration is raised and they will then serve you rather than drag you down by exhibiting depressive tendencies that they have picked up from yourselves! This is why you should bless your houses. Anything new that is brought into your house should be blessed because you don’t know the history of that object. You don’t know who has touched it before and what their thoughts or intentions have been. Everything should be blessed and dedicated to God …so should the food you eat …and the thoughts you think. You should say: ‘Father, this morning I pray that my thoughts be of the highest quality and worthy of You‘ and then you are not thinking things that will damage you.

I would like to leave you the box to think about. It is just a box that I have created from nothing by thinking, ‘I want a box!’ …What colour is it? …What is its intention? …What are you going to use it for? …How much Light does it exhibit?

It is up to you!

It depends what you put into that box.

That box is symbolic of every thought you think, everything that is in your house and everything you use – from your car to your telephone – and, just as I can influence that box, you can influence the things around you. That is why the Persian Gentleman said some weeks ago [reference to a private communication] that you should bless and thank the objects in your house because they react at a God-level to your approach to them. If you love them they become filled with Love; if you hate them they become filled with hate.

So, I have tried to bring through something of the abstractions of our world and pin them down through physical speech. Our world is a world of Light, of colour, of perfume, of sound …and so is yours.

Bless your days …bless each other …bless the objects, then you are putting Light into them as co-creators of those objects, which is what you will eventually be. You might not have physically made a chair today but one day with the power of your mind you will. You might not have physically built a house but you are building the house you will go to in the spirit realms by your thoughts whilst on Earth. So, be aware that you are creating, even though this is a pool of vibration where other people create physical objects for you. What you put into those objects you create yourself and get heaven or hell from them dependent on your motives and thoughts.

Would Jane like to ask a question about anything I have just said?

Jane: Do objects emit music as well?

Silver Star: To create heat you stir up molecules – that is heat. Heat is molecules hitting each other because they have become agitated. To create any object, you formulate the molecules so that they become that object temporarily. Once formulated, however, they are not in stasis but in motion. If something is in stasis it decays; therefore, there must be movement. So, dashing about within the box that I have created are molecules that give the illusion of the box being ‘solid’. They are reacting to each other and – as when you rub your finger around the rim of a wine glass – it creates a sound. You cannot hear it on this level but you can on ours. Every object has a tone because of the way in which the molecules within it are moving. Does that make sense?

Jane: Yes, thank you.

Silver Star: Was there anything else?

Jane: No.

Silver Star: Then I will leave you, although I never leave you on a Sunday and, in particular, I never leave Jane [reference to Silver Star’s role as my principal guide]. I leave Michael to my colleague, the Persian Gentleman [reference to P.G. being Michael’s guide], and God bless you for the work you have done this week on behalf of us all – and for the work you will continue to do.


Analysis

Is there any value in considering the experiences described in the text above? With a physicalist ontology paired with inferential realism about perception, we are compelled to conclude that when people report traveling to heaven worlds and seeing objects that make signature multi-sensory music, they are really reporting on the quality of hallucinations. Alas, this is not enough to dismiss these reports as useless. Why? Because they may have some key information about how phenomenology works, and specifically, about valence structuralism. Let me explain.

The experience of going to a phenomenal world where the objects resonate and produce notes in all sensory channels has been reported by a number of people from various traditions (e.g. how Buddhas are said to emit blissful vibrations in the pure abodes, flowers making music in heaven as described by a gnostic “medium” (The process of dying), the sound-sight complementary nature of Mandalas and Mantras*, etc.). As I’ve said elsewhere, “desiring that the universe be turned into Hedonium is the straightforward implication of realizing that everything wants to become music”. So when people say things like this, listen. They may be reporting back on glimpses they’ve had of radically enhanced modes of being, whether or not they are really gaining privileged access to an external transcendent world of consciousness.

Gaining Root Access to Your World Simulation

Let’s examine the phenomenology described under the theoretical paradigms developed at the Qualia Research Institute. Some core paradigms are Qualia Formalism (“every conscious experience corresponds to a mathematical object such that the mathematical features of that object are isomorphic to the phenomenology of the experience”), Valence Structuralism (“pain and pleasure are structural features of the mathematical object that corresponds to an experience, such that they can be read off from this object with the appropriate mathematical analysis”), and the Symmetry Theory of Valence (“the mathematical features that correspond to pain and pleasure are the object’s symmetry and anti-symmetry, namely, its invariance under the transformations the object is undergoing”). Whereas Qualia Formalism is a necessary assumption to make in order to make any progress on the science of consciousness (cf. Qualia Formalism in the Water Supply), Valence Structuralism and the Symmetry Theory of Valence are currently still mere hypotheses whose truth will be determined empirically by testing the predictions they generate. For now, we rely on strong, but admittedly circumstantial, pieces of evidence. The reported phenomenology of the connection between harmony and bliss in heaven worlds is one of these pieces of evidence. But can we do better? Not everyone can access these so-called heaven worlds of experience. So what can we do instead? Thankfully, there is a way…

Indeed, as carefully catalogued by Steven Lehar in his book about altered states, “The Grand Illusion“, combining dissociatives and psychedelics produces an altogether new kind of experience, different from the experience of either alone. He first discovered this by combining DXM and THC, which he says has a decent chance of giving rise to a free-wheeling hallucination, meaning a state in which one can control the contents of one’s hallucinatory experiences. For example, rather than being at the mercy of your hallucinatory world, in such a state you could choose to fly on an airplane, travel the cosmos, or even go to Burning Man, and your mind will render a world-simulation in which you are doing such things.

Lehar, famously, is a proponent of indirect realism about perception. Hence he does not mistake those experiences for transcendent access to an etheric plane or to literal other dimensions. Rather, he points out, the phenomenal character of one’s experience is generated by patterns of harmonic resonance interfering with each other within the confines of one’s own brain. The illusion of “seeing things outside of oneself” arises because the experience we have seems to be of a 3D space. In reality, however, your experience is a 2.5D diorama-like projective space that represents a homunculus looking at a 3D space:

[slideshow]

In the book, Lehar discusses how one can study some of the key parameters of one’s world-simulation in such a state. By imagining/requesting/generating lenses, diffraction gratings, and mirrors in your world-simulation during a free-wheeling hallucination, you can explore the ray-tracing algorithms your brain uses to render your experience in normal circumstances. Just as the best way to figure out how a videogame engine works is to break it with corner cases, overloading your world-simulation with difficult-to-render elements is a highly useful way to discover how your brain builds it.
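As a point of comparison from ordinary computer graphics (not a claim about what the brain actually computes), the mirror probe Lehar describes corresponds to the standard reflection step of a conventional ray tracer. A minimal sketch:

```python
import math

def reflect(d, n):
    """Reflect ray direction d off a mirror with normal n (2D vectors as tuples).

    Standard ray-tracing formula: r = d - 2 (d . n) n
    """
    nx, ny = n
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm          # ensure the normal is unit-length
    dot = d[0] * nx + d[1] * ny            # d . n
    return (d[0] - 2.0 * dot * nx, d[1] - 2.0 * dot * ny)

# A ray travelling down-and-right hits a horizontal mirror (normal pointing up)
# and bounces to up-and-right:
print(reflect((1.0, -1.0), (0.0, 1.0)))  # -> (1.0, 1.0)
```

A free-wheeling hallucination that renders mirrors faithfully would, in effect, be approximating this bounce; where it fails or simplifies, you learn something about the rendering engine.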

What algorithms does your mind use for ray-tracing? According to an anonymous reader (whom we shall call “R”), the DXM + THC state allows you to explore precisely this question. From conversations with R, it seems that ray-tracing follows a repeating two-step, top-down and bottom-up recursive process in order to draw your experience. Amodal percepts are first constructed and set in place to build a projective frame (e.g. isometric projection, fish-eye lens projection, etc.), which is followed by the “lighting” of that amodal space with modal qualia (color, scent, touch, etc.) to increase the definition of the rendered scene. Then an algorithm kicks in that figures out which is the most under-constrained region of the world-simulation, and rendering begins there, again with an amodal frame followed by modal filling-in; the process repeats until you reach reflective equilibrium.
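For illustration only, here is a toy caricature of the two-step loop R describes. Every name and number in it is our own invention; there is, of course, no claim that the brain literally runs anything like this code:

```python
def render_world_simulation(scene, max_steps=20, tolerance=0.05):
    """Toy caricature of the reported two-step rendering loop.

    scene: dict mapping region name -> 'constraint' level in [0, 1]
           (how well-specified that region of the simulation already is).
    Each step picks the most under-constrained region, lays down an
    amodal projective frame, then fills it in with modal qualia.
    Repeats until every region is near reflective equilibrium.
    """
    log = []
    for _ in range(max_steps):
        # 1. Find the most under-constrained region of the world-simulation.
        region = min(scene, key=scene.get)
        if 1.0 - scene[region] < tolerance:
            break  # reflective equilibrium reached
        # 2. Top-down: set an amodal projective frame (partially constrains it).
        scene[region] += 0.5 * (1.0 - scene[region])
        log.append(("amodal", region))
        # 3. Bottom-up: "light" the frame with modal qualia (color, touch, ...).
        scene[region] += 0.5 * (1.0 - scene[region])
        log.append(("modal", region))
    return log

steps = render_world_simulation({"sky": 0.2, "ground": 0.8})
print(steps[0])  # -> ('amodal', 'sky')
```

The point of the sketch is just the control flow: amodal before modal, always starting from the least-constrained region, looping until nothing is badly under-constrained.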

The amodal step allows you to explore the range of possible projective transformations of your world-simulation. It is at this step that you can explore the rendering of cinematographic camera effects, such as lens flares and the Hitchcock Zoom. Unlike in pure psychedelic states (e.g. LSD, psilocybin, etc.), on dissociatives projective transformations seem to have a certain gravity to them. It’s as if the entire scene were built as a model on a platform suspended with ropes and springs. To turn the space around (i.e. change the projection) you can “add weight” to some of its parts, and then let the ropes readjust the orientation of the model to balance it out again. This is why dissociative projective hallucinations have a characteristic initial acceleration ramp-up, similar to what it is like to give a push to a ceramic turntable with a heavy vase. It moves very slowly at first, but once it gets going it keeps spinning until you stop it, which also has a characteristic deceleration dynamic. In the free-wheeling hallucination state the world-simulation behaves like that: the model can be pushed around to change the projection, and when you do so it travels at constant speed until it decelerates into its new position. During the modal step, effects can be added such as wind, dust, rain, hail, liquid, gelatin, etc. These are all applied one at a time in a relatively legible way, with physicalized rendering of, e.g., a tank filling up, parts of wood combusting with a low-grade fire propagating at constant speed, an ocean of water increasing its viscosity until it becomes lava, etc. A very valuable follow-up research project that could be done at, e.g., a Super-Shulgin Academy of rational psychonauts would be to have people study SIGGRAPH algorithms and then try to replicate them during free-wheeling hallucinations. Perhaps not so surprisingly, graphics researchers are notorious for using psychedelics**.

Some people may want to say that there is nothing special about these states… we already have lucid dreaming, after all, right? Alas, that is a very misguided analogy. Yes, lucid dreams can be used to explore unusual phenomenal configurations, but the scenes experienced are very hard to stabilize and examine in detail over the course of minutes (rather than seconds). The features of free-wheeling hallucinations, along with their sheer emotional intensity, make them not comparable to most moments of life, or even to lucid dreams. The texture of dissociative + psychedelic states is drastically different from that of lucid dreaming, and so is the degree to which their content can be controlled in precise and measured ways (not to say that dream music cannot be very emotional, but we are talking about something on another level altogether). That said, the state has clear limitations, too. Lehar points out, for instance, that looking at the control panel of an airplane during a free-wheeling hallucination reveals that you cannot have as many buttons as you would have in real life. Likewise, the projective transformations are restricted in some ways. A specific example would be trying to simulate going to Las Vegas. There you will find that jumping off a building is not possible, because that would require a projective transformation where the distance covered grows quadratically as a function of time. Instead, jumping off a building will be approximated by what it feels like to go down an elevator at constant speed. In contrast, getting into the High Roller wheel is an excellent choice, because your world-simulation can render the circular constant-speed projective trajectory of the camera movement in a very accurate and precise fashion.
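The quadratic-versus-linear distinction here is just elementary kinematics: free fall covers distance as d = ½·g·t², while a constant-speed elevator covers d = v·t. A quick sketch (the elevator speed is an arbitrary illustrative value):

```python
# Distance fallen under gravity grows quadratically with time (d = 0.5 * g * t**2),
# whereas a constant-speed elevator covers distance linearly (d = v * t).
# The reported dissociative rendering engine can only manage the latter.
g = 9.8  # m/s^2, standard gravity

def free_fall_distance(t):
    return 0.5 * g * t ** 2

def elevator_distance(t, v=5.0):  # v chosen arbitrarily for illustration
    return v * t

for t in (1, 2, 3):
    print(t, free_fall_distance(t), elevator_distance(t))
# Free fall: ~4.9 m, ~19.6 m, ~44.1 m  (doubling t quadruples the distance)
# Elevator:   5 m,   10 m,   15 m      (doubling t merely doubles it)
```

Rendering the former requires a camera trajectory that accelerates continuously, which is exactly the kind of projective transformation the state reportedly cannot sustain.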

[slideshow]

Happiness and Harmony: A Marriage Made in Heaven

With regards to the connection between symmetry and pleasure, the free-wheeling hallucination state is a prime place to conduct high-quality phenomenological research. In particular, R tells us that you can study how different objects generate (are resonant with) particular moods, sounds, and tactile feelings. Thus, DXM + THC can be a tool of grand scientific significance for the study of emotional valence.

[If what follows is hard to make sense of, please bear with us, it will get easier:] R points out that one can climb the symmetry gradient within a scene by normalizing the space vertically, horizontally, depth-wise, temporally, weight-wise, and so on. During each amodal step you can try to re-align the projective lines so that the points at infinity are either precisely at the center of your vision, or at least have symmetrical counterparts (e.g. left-right, top-down, etc.). This way you can take the energy of the phenomenal objects and generate what we call a “projective energy entrapment”. This is a strange thing to report on, one that goes outside of most people’s conceptual schemes, but it is definitely real, and very important. With a symmetrified amodal projective frame, the phenomenal object can “lock in place”, and a process of annealing takes place. The pre-existing invariant degrees of freedom present in the amodal projections before the symmetrification are still there, but on top of them one gets additional invariant degrees of freedom. R compares this to taking a space with affine geometry and projecting it into a space with Euclidean geometry, such that the space now has two layers of invariance that add up and, together, prevent the energy in the phenomenal objects from escaping to their surroundings. In light of QRI’s meditation paradigms, this would be akin to removing energy sinks from the system. In turn, the projective energy entrapment allows the build-up of extraordinarily intense resonance amplitudes, and this, according to R, feels really good.

Thus, at least according to these observations, the emotional valence of our world-simulation is related both to the number of active invariant degrees of freedom along which transformations are taking place and to the amount of energy entrapped in such spaces. Here is a good example. The gif on the left is what the space may look like at first, with the pseudo-time arrow creating a video-feedback effect that gives rise to chaotic behavior. If you meditate on normalizing the projective points, you can turn that space into something akin to the image on the right. Now, rather than having the modal energy leak to the surroundings, it all gets trapped inside the cube, giving rise to a powerful feeling of emotionally loaded resonance, which empirically feels really good.

To reiterate, the image to the left is what it feels like when the space is asymmetrical. The affine symmetries may be preserved, but Euclidean energy entrapment is not possible there. The one on the right has both affine invariants and Euclidean invariants, which allow for more energy entrapment (and also temporal stability). If you can stabilize an infinite hall of mirrors projected from a highly symmetrical point of view, you will be able to entrap a lot of energy, which feels like a resonating space and empirically this seems to be very pleasant. We would love to hear from more rational psychonauts whether they are able to replicate this experience, and whether the correlation with valence is also uncovered in their world-simulations.
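As a toy illustration of “invariance under a transformation” (our own construction, not QRI’s actual formalism), one can score a pattern by how much of it survives a horizontal flip; a fully mirror-symmetric “hall of mirrors” pattern scores 1.0, while a chaotic pattern scores lower:

```python
def mirror_symmetry_score(grid):
    """Fraction of cells equal to their left-right mirror image.

    A crude stand-in for 'invariance under a transformation':
    a score of 1.0 means the pattern is fully invariant under a
    horizontal flip; lower scores mean less symmetry.
    """
    matches = total = 0
    for row in grid:
        for x, cell in enumerate(row):
            matches += cell == row[len(row) - 1 - x]  # compare with mirror cell
            total += 1
    return matches / total

hall_of_mirrors = ["ABA", "CDC", "ABA"]   # symmetrical, 'annealed' pattern
chaotic_room    = ["ABC", "DAB", "CCA"]   # asymmetrical pattern
print(mirror_symmetry_score(hall_of_mirrors))  # -> 1.0
print(mirror_symmetry_score(chaotic_room))     # lower score
```

On the Symmetry Theory of Valence, one would predict (very loosely) that the higher-scoring configuration supports more energy entrapment and hence feels better; the toy metric only makes the notion of “layers of invariance” concrete.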

In the more general case, one can do this with other amodal projections, and generate peaceful but wide-awake, energy-filled mandalas of experience:

Indeed, during dissociative free-wheeling hallucinatory states one can “tune in to the phenomenal objects” one constructs and translate their vibration into sounds, tactile sensations, emotions, and even scents. Again, empirically, one will notice that the projective symmetry of an object will be associated with the symmetry of the multi-modal translation, and in turn, with the valence of the object. Ugly phenomenal objects will have discordant sound signatures, whereas smooth, easily compressible, and symmetrical phenomenal objects will have harmonic sound signatures. You can indeed try to listen in to your entire experience, which in the general case will give rise to rather experimental-sounding music. If you have annealed your hallucinations into a highly symmetrical state (e.g. the hall of mirrors above) the sound translation will be *very* pleasant, akin to Buddhist chants in a reverb chamber or mystical violins in a resonant cave (again, this is an empirical finding as reported by R, rather than just armchair speculation, but it does support QRI’s hypothesized Symmetry Theory of Valence). Two very symmetrical objects with different but nearby frequencies will clash at first, but one can try to resolve this clash by paying close attention to both at once. If you do so, sooner or later one of the objects will dominate the other, they will undergo affine transformations until they merge, or there will be some kind of compromise where you stop paying attention to some of the features of one or both of the objects such that the remaining ones are in harmony. Here are some more examples of what a more complex DXM + THC state may render that could produce extremely beautiful sounds:
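The reported mapping from nearby frequencies to clash has a well-studied counterpart in psychoacoustics. As a hedged sketch, here is Sethares’ parametrization of the Plomp–Levelt roughness curve for two pure tones, which rates a minor second as far rougher than a perfect fifth:

```python
import math

def pl_dissonance(f1, f2):
    """Sensory dissonance of two pure tones.

    Uses Sethares' parametrization of the Plomp-Levelt roughness curve
    (from 'Tuning, Timbre, Spectrum, Scale'). Peaks when the tones are
    close but not identical, and falls off for wide intervals.
    """
    f_min = min(f1, f2)
    s = 0.24 / (0.021 * f_min + 19.0)   # scales the curve to the critical band
    x = s * abs(f2 - f1)
    return math.exp(-3.5 * x) - math.exp(-5.75 * x)

# A perfect fifth (3:2 ratio) is smooth; a minor second is rough.
fifth  = pl_dissonance(440.0, 660.0)
second = pl_dissonance(440.0, 469.3)
print(fifth < second)  # -> True
```

Nothing here depends on the free-wheeling hallucination reports being veridical; it simply shows that “nearby frequencies clash, harmonic ratios soothe” is already quantifiable for ordinary hearing, which is what makes the reported multi-modal generalization testable in principle.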

Thus, even within our modern scientific paradigms (updated with QRI’s frameworks) we can explain and indeed utilize the phenomenological reports of heaven worlds. We believe that understanding the underlying mathematical basis of valence will be ground-breaking, and analyzing these reports is a really important step in this direction. It will allow us to make sense of the syntax of bliss, and thus aid us in the task of paradise engineering.

But What if Our Scientific World Picture is Wrong?

Indeed, maybe the picture of the world painted above is fundamentally mistaken. Within it, we would gather that heaven and spirituality are, in a sense, illusions caused by high-valence states. The experience is so beautiful, sublime, and delightful that people try to make sense of it by invoking God, the divine, and transcendent love. Alas, if the universe has an in-built utility function based on valence, we could expect people to become confused in that way when experiencing some of the really good states. But could we be mistaken here? This points to an important question: whether bliss is a spiritual phenomenon, or spirituality a bliss phenomenon.

Leibniz might retort by saying that although each person is a separate monad, God keeps their experiences synchronized so that our actions in Maya do have effects on other sentient beings, rather than being just a movie of one’s own making. Thus, a world-simulation model may be correct, but what makes an experience blissful or not is not its physical state, but rather its degree of spiritual wisdom. Or consider, for instance, the Tibetan Book of the Dead, along with its psychedelic interpretation (by Leary, Metzner, and Alpert), which suggests that both good and bad feelings are projections of the unconditioned mind.

Alas, even if a spiritual account turns out to be true, in the sense that interpreting these experiences through its lens is more accurate and inferential realism about perception is misguided, we would still be able to gather from these reports that the structure of thought-forms encodes emotions – perhaps even in a spiritual multiverse with souls, God-energies, Realms, etc. we would still be able to derive a master equation for valence. Rather than valence structuralism referring to the fire in the equations of physics, it might refer to God-energy, emptiness, etheric fields, or whatnot. But why would that matter? We could still formalize and mathematize the nature of bliss under those conditions. This is what we call Spiritual Structuralism: even in the spirit world, math still encodes the experiential quality of phenomena. Saying “we either live in a mathematically-describable physical world or in a mysteriously inscrutable spiritual world” may turn out to be a false dichotomy. We could still look forward to having spiritual analogues of Galileo, Newton, and Einstein, albeit their equations would apply to how the God-force self-interferes to generate the multifaceted spiritual world of sentient beings.

Alas, I suspect that many spiritual people would recoil at the prospect of mathematizing Mystical Love. So let us ask ourselves: Would this be good? Tongue-in-cheek, I remember asking God at Burning Man 2017 whether deriving the equation for valence would be good from a spiritual point of view. God’s response? A resounding “Yes!” Importantly, God emphasized that individual bliss is limited, whereas collective bliss is boundless. But maybe this, too, could someday be formalized. Here I reproduce the relevant part of our conversation:

Me: I’ve been working on a theory concerning the nature of happiness. It’s an equation that takes brain states as measured with advanced brain imaging technology and delivers as an output a description of the overall valence (i.e. the pleasure-pain axis) of the mind associated with that brain. A lot of people seem very excited about this research, but there is also a minority of people for whom this is very unsettling. Namely, they tell me that reducing happiness to a mathematical equation would seem to destroy their sense of meaning. Do you have any thoughts on that?
God: I think that what you are doing is absolutely fantastic. I’ve been following your work and you are on the right track. That said, I would caution you not to get too caught up on individual bliss. I programmed the pleasure and pain centers in the animal brain in order to facilitate survival. I know that dying and suffering are extremely unpleasant, and until now that has been necessary to keep the whole system working. But humanity will soon enter a new stage of its evolution. Just remember that the highest levels of bliss are not hedonistic or selfish. They arise by creating a collective reality with other minds that fosters a deep existential understanding, enables love, enhances harmony, and permits experimenting with radical self-expression.
Me: Ah, that’s fascinating! Very reassuring. The equation I’m working on indeed has harmony at its core. I was worried that I would be accidentally doing something really wrong, you know? Reducing love to math.
God: Don’t worry, there is indeed a mathematical law beneath our feelings of love. It’s all encoded in the software of your reality, which we co-created over the last couple billion years. It’s great that you are trying to uncover such math, for it will unlock the next step in your evolution. Do continue making experiments and exploring various metaphysics, and don’t get caught up thinking you’ve found the answer. Trust me, the end is going to make all of the pain and suffering completely worth it. Have faith in love.
Me: Thank you!

– Conversation with God, Burning Man 2017

John C. Lilly’s Simulations of God posits that as people evolve and mature, their concept of the highest good also evolves and matures. Thus, learning about the math behind pleasure might very well transform your conception of divinity. I would therefore offer a new perspective on what God is, one that unifies bliss and spirituality: God is a happiness engineer who knows all the theories, tricks, and techniques to optimize qualia for bliss. Indeed, Romeo (from QRI) has reported experiencing multi-sensory harmonious mandalas when closing his eyes after coming back from long meditation retreats. One could very well posit that this world is the training ground of souls, where they learn to avoid creating evil thought-forms while practicing how to increase the harmony between all sentient beings.

Additionally, Wireheading Done Right could have a spiritual analogue – namely, moving between realms in such a way that you go from one good realm to the next, reaping their functional benefits while avoiding getting stuck in any one of them. Even if a spiritual universe underlies our reality, that would not mean that suffering forever is either good or necessary. And if there is math associated with love and liberation, we are all the better for it, for then we are not shooting in the dark. Carefully selected dynamical-systems equations that give rise to beautiful (i.e. high-valence from many different points of view) patterns could very well be what is behind the bliss of heaven worlds, and you neglect this discovery at your own peril.

Alas, this is unlikely. Even so, promoting a given ontology is less important, intrinsically, than promoting subjective wellbeing. For if someone’s delusions are comforting and do not interfere with their ability to help others, there is no reason to remove them. Do not go and try to convince your dying granny of the non-existence of God, for that is pointless cruelty. And certainly don’t go around talking about tenseless suffering in the Everettian multiverse outside of circles whom this can help, either intrinsically or extrinsically (I assume being candid with QC readers is a net positive, though I often doubt that a bit myself).

Either way, from what I gather, practicing creating beautiful harmonic thought-forms is probably good for your health and happiness. It is likely to make you not only feel good, but also be sweet-natured. So by all means practice doing it as often as you can.

Infinite Bliss!


*”Mantras, the Sanskrit syllables inscribed on yantras, are essentially ‘thought forms’ representing divinities or cosmic powers, which exert their influence by means of sound-vibrations.” [source]

**”In 1991, Denise Caruso, writing a computer column for The San Francisco Examiner went to SIGGRAPH, the largest gathering of computer graphic professionals in the world. She conducted a survey; by the time she got back to San Francisco, she had talked to 180 professionals in the computer graphic field who had admitted taking psychedelics, and that psychedelics are important to their work; according to mathematician Ralph Abraham.” [source]