God and Open Individualism

by Roger Thisdell (context: I messaged Roger asking him about his thoughts on Open Individualism. A few days later he sent me this response. To get the most out of it, I recommend first reading our earlier text message exchange here: The Supreme State of Unconsciousness: Classical Enlightenment from the Point of View of Valence Structuralism)


Set-Up and Squaring Intuitions

There is a problem in philosophy of backwards rationalisation, where people feel intuitive pulls towards certain conclusions and then try to justify why their intuition is correct. We can say this is putting the cart before the horse. If we are to philosophize well, we shouldn’t start with the conclusion. However, the pull to side with your intuitions is so incredibly crucial to decision-making that it basically can’t be ignored. In fact, trying to know anything fundamentally hinges on a feeling quality of ‘this seems/feels right’ in relation to a proposition.

Now, this isn’t to say that intuitions have no truth value; it’s just that we need to be subjectively sensitive to when we are being totally led by a feeling (which I think some philosophers, in many cases, aren’t aware of). At the end of the day, we go off of whether an idea sits right with us at some particular level(s) of the mind, and all the justificatory attempts in favor of this idea serve to shift that feeling in us one way or the other.

Leading on to the discussion of identity: in a lot of thought experiments and attempts to understand where identity starts and stops we find an appeal to intuition. This is often done by conjuring up convoluted scenarios of teletransportation machines, brain transplants, or Men-In-Black-style memory wipes, and then reflecting on whether we feel that identity stayed the same or not. A good way to press people’s intuitions is to get them to consider suffering, as personal identity is the great motivator of avoiding suffering (no self = no problem, as they say). Asking where and at what time suffering is endured, and by which collection of atoms, gets people to decide really fast and more confidently where they think the bounds of identity lie.

Along with the epistemological problems of resting an argument on intuition or ‘gut feeling’ mentioned above, intuitions differ not just from person to person, but from moment to moment (in the same person). And if you haven’t become privy to how your intuitions can change, you may not question the truth value of the signal they are transmitting. So, I write this to highlight the problems of trying to solve identity issues by appealing to a felt-sense of where it lies.

Two Ways of Talking About the Self

Now I see an obvious split in how to approach this topic: 

(1) We can talk about identity as a raw experience – what in the experience space do I feel numerically identical to (one and the same as)? – and in Buddhistic fashion forgo metaphysical claims hereafter.

(2) We can try to extrapolate beyond immediate experience and argue for a position on what the self is or how identity functions in a metaphysical sense. I call (2) the conceptual self, as it is about the content within the concepts you believe refer to you.

To make this distinction clear I’ll give an example of a potential answer to (1) and then to (2). If asked: “What am I?” along the lines of (1), one may answer: “I feel like I am my thoughts.” – thoughts arise in experience and there is a fused impression of ‘me-ness’ to those thoughts. (2), by contrast, is concerned with the content of those thoughts; if asked: “What am I?” one may answer, and even fervently believe: “I am a brain.” However, they don’t have any direct experience of being a brain – it is an extrapolation of ideas beyond direct phenomenological perception.

Sorry for all the set-up! This is my framing, and to give you the best response I needed to spell this out. Now, let me answer personally what I believe identity is in terms of (1) and then (2). However, (2) is informed by (1), and (1) is made sense of by (2); so although the distinction is very useful, like all separations, its boundaries seem to always break down – and in that there is a hint about my metaphysical beliefs.

Phenomenological Senses of Identity

For me, this has changed throughout the years as I’ve meditated more and more. I have shared these images with you before and they represent the transition of intuitions of personal identity throughout my journey.

They seem to match up quite nicely with how Frank Yang lays out his stages. Depending on which stage someone is in, we hear different metaphysical explanations of identity. (This is where (1) gets easily conflated with (2)).

How I’ve seen Frank spell out his stages (I realize neither of us came up with these on our own):

[Image: Frank Yang’s stages]

When it comes to identifying with awareness (the second picture/stage), this is when you hear talk of there being one universal consciousness which is our true nature. When I was identifying with awareness, I could suddenly relate to what people meant by ‘we are all one universal consciousness’. However, I got the sense that people were failing to differentiate between something being numerically identical and qualitatively identical. When you become ‘aware of awareness’ there is a sense that this is a pristine dimension and is not personal. It doesn’t seem to belong to the notion of Roger (as it is perceived causally before the very idea of Roger), nor is it trademarked by Roger’s beliefs or memories. There is an insight that this perfectly equanimous layer of being is part of everyone’s experience; they just don’t see it. Yet I couldn’t rule out whether we are all in touch with the same one pure light of consciousness, or whether each sentient organism has its own and our consciousnesses (plural) are just qualitatively the same. I think people often miss this distinction.

Stage 2 does not obviously lead to open individualism yet. There is still a sense of the duality between the radiant awareness and everything else to be aware of.

Although I think that anyone (even those without emptiness insights) could be talked into believing closed, open, or empty individualism at a conceptual level, this doesn’t mean their phenomenological experience of identity would change, nor would their instinctive, non-inquisitive gut intuition on the subject.

I would hypothesize that those who have no insight into the three characteristics are intuitively most swayed by closed individualism. Those who have sufficient insight into impermanence (but not no-self) may intuitively side with empty individualism. And with a deep enough insight into no-self, open individualism becomes a no-brainer.

Experiencing God (and a message to Leo)

Stage 3 is when open individualism is most likely to begin to feel intuitively right. This is also when talk of being God comes out of people’s mouths: in terms of (1), they phenomenologically perceive the sense of ‘I’ in everything they experience, and in terms of (2), they conceptually infer there is just one thing, call it ‘God’. God is everything. I am everything. Because the understanding of the move from (1) to (2) (from experience to conjecture) is often lost on people, all kinds of wacky metaphysical beliefs come about – supposedly self-validated by higher consciousness or direct cosmic download.

While on stage 3, if you inject some metta into your experience space, you come to see what people mean when they say: “God is everywhere and all loving” or even: “God is love”. Feeling like you are everything in your experience means you don’t feel separate from anything; thus there is a deep intimacy with the world which comes across as love. You feel like you are the body, the thoughts, the emotions, the trees, the hills on the horizon, the air in between all of it, the sky, and the awareness field which contains all these things. However, going from ‘the experience of feeling identical to everything you are aware of’ to ‘I am everything (even that which I’m not currently aware of) and therefore I am God/the universe’ requires an unfounded leap – which I admittedly made at some point.

I remember an incredibly stark moment I had when I was in stage 3, where being ‘God’ felt like the most real thing (I can sympathise a lot with where Leo Gura is coming from – though I think he’s lacking some phenomenological discernment). Because at stage 3 the sense of ‘I’ is so prevalent, due to it being perceived everywhere in experience, I was investigating this quality a great deal. I was trying to distil the sense of ‘I’ down to its rawest form. “Yes, I feel identical to the trees and the sky and other people, but what is that common element that can be found in all these things which I call ‘I’?” After whittling away all the other unnecessary phenomenological baggage piled onto this ‘I’, I arrived at a clear perception of ‘I’ in its rawest form. This ‘I’ I call the epistemic agent, the pure sense of ‘a knower of experience’.

It became obvious, once the epistemic agent was singled out in experience, that this perception of ‘I’ can only manifest in one way. What I mean is that, unlike with milk – where the formula can be tainted slightly, resulting in versions of milk with slightly different colors, tastes, or smells that are all still milk – it is impossible for the epistemic agent to have a slightly different perceptual ‘flavor’ than it does. This is because the qualia recipe consists of only one ingredient, and if that’s missing or different, then it’s not the epistemic agent (the rawest sense of ‘I am’). Once I clocked this, I realized that all iterations of ‘I’, wherever and whenever, in all beings at all times, experience the sense of ‘I am’ in exactly the same way. Then, and I remember this moment so clearly, it hit me: if God or the universe is self-aware – which it is just by dint of me being of the universe and self-aware – and has an experience of ‘I-ness’, then my experience of ‘I-ness’ in this relative body is the same as God’s, and through a sharing of experience there is a direct link, and so… “Oh my god, I am God!”

(I am not suggesting that this line of reasoning is sound. It was simply the series of steps I went through which brought upon this profound experience). 

Again, the numerically-versus-qualitatively-identical distinction could be raised here; however, there is a way around it, for when you remove the sense of time and space from the equation, that difference collapses. To say that something is qualitatively identical to something else, but not numerically identical, doesn’t make sense if the two things can’t be differentiated by existing in separate moments of time or separate regions of space. So in my “Oh my god, I am God!” epiphany, the sense of time and space had been shunned from attention and numerical identity was presumed.
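The numerical/qualitative distinction itself has a neat analogue in programming (a loose illustration of my own, not from the original exchange): Python’s `==` tests qualitative identity (same content), while `is` tests numerical identity (one and the same object).

```python
# Qualitative identity: two objects with exactly the same content.
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)  # True  – qualitatively identical
print(a is b)  # False – numerically distinct objects

# Numerical identity: two names bound to one and the same object.
c = a
print(c == a)  # True
print(c is a)  # True  – one object, two labels
```

What differentiates `a` from `b` is only their separate locations in memory; take away that “space” and the two notions of identity collapse into one, which is the shape of the argument above.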

I can imagine that someone has this epiphany moment as I did, but then when they return to a more ‘timey/spacey’ existence they retain credence in the belief that they are God and not just a single, distinct instance of experience of ‘I’ (which would be more of an empty individualist thought). They do this because they are basing their beliefs off of a very profound mind moment, even if the majority of their waking hours don’t suggest the same message.

If I could tell Leo Gura one thing it would be this: “Profundity does not equate to truth.” Just because something felt so real and epic does not mean that experience gave you the most accurate representation of greater reality. Truth be told, at stage 3 I didn’t have anywhere near the attentional clarity, precision of view, and metacognitive abilities that came later; so while I was having all these profound experiences I was not totally clued into the subtle ways I was manipulating my experience, and I was biased towards certain perspectives while overlooking things that became clearer to me later on.

Self, Not-self, and Neither Self nor Not-self

When it comes to personal identity, I want to distinguish three things the mind can do here:

  1. It can project a sense of self onto parts of experience – “I feel like I am this chair.” – said the man on salvia.
  2. It can project a sense of not-self onto parts of experience – “I don’t feel identical to that person over there.” – said sober Joe. I want to emphasize here that I don’t mean there is just a lack of ‘feeling’ associated with something, but rather there is an actual new ‘feeling’ of not identifying with something.

Stage 4 (my 4th picture) was living a life with the constant signal of ‘not me’ being coupled with everything I pointed my attention to. 

  3. It can stop projecting any sense of self and not-self – “I neither feel like I am everything, nor that I’m not.” – said Roger. Here, I mean the lack of projecting both a sense of self and even a sense of not-self.

To go into a little more detail on what is meant by 3: ‘Neither self, nor not self’… essentially there is just no transmission of data on this subject. No reading. When asked “What are you?” it’s like the question doesn’t even compute. Before, there were qualia indicators by which to judge what is self and what is not-self. And now it’s like the mind draws a blank. It is not that the answer is obviously ‘I am everything’ or ‘I am nothing’. It’s almost a bit like asking a person who is blind from birth, “Do you just see blackness?” – it can be really hard for sighted people to get their heads around the fact that some blind people don’t see anything at all (and what that really means). 4th path is akin to becoming blind to identity, in a way. Although I wasn’t identity-blind from birth, my memory of the qualia of ‘me-ness’ and ‘not-me-ness’ is incredibly faded.*

*There is subtle nuance to get into with retaining semblances of individuality just to be able to function in the world.

The Ship of Theseus, Threshold Emptiness Insight and Losing the Ability to Buy into Nouns

At a certain point, once enough insight into emptiness was established, the ability to seriously believe in separate entities became near impossible. I remember with my beginner’s mind, closed individualism was the default position. And when nouns were comprehended, they were firmly believed to be distinct, real partitions in reality. “The world is made of things that are tables and things that are not.” (As if a table is an actual thing, lol). However, now I can never fully think that a table is anything more than a mind-made construct. It is perceived as so porous, airy, hollow… empty. And this applies to all nouns: ‘atoms’, ‘being’, ‘non-being’, ‘life’, ‘death’, ‘mind’, including the idea of ‘The Now’ (I’ll get into that later).

One time in philosophy class we were going over the ‘paradox’ of the Ship of Theseus. People in my class had all kinds of differing intuitions. Some said, ‘as soon as over 50% of the ship’s parts have been replaced, it’s a new/different ship’. Some said, ‘as soon as you replace one part of the ship it’s a new/different ship’. And others said, ‘as soon as one atom changes it’s a new/different ship’. They were going back and forth arguing about identity, which was the point of the class. And meanwhile the whole time I was thinking there is no Ship of Theseus to begin with – there never was; it’s not a thing. And so there is no paradox. There is no conundrum to solve.

I had been reading ‘The Master and His Emissary: The Divided Brain’ at the time, and it occurred to me during the class that what I was witnessing was people with very different brain chemistries and either left- or right-hemisphere biases, and that this was what was leading them to different conclusions (me not being an exception) – the philosophical quibbling had little to do with it. (This is not to resort to any postmodernist conclusions. I do think some positions contain more truth signal than others.)

4th Path Putting the Nail in the Coffin for Empty Individualism?

There is no ‘now’, as there is not enough time for even a single isolated self to form. At 4th path, insight into emptiness is so stark that you realise that to conceive of ‘The Now’ as a thing is wrong view. I used to experience things as arising and then a moment later passing; as manifesting and then slightly thereafter defabricating. But now I can see how phenomena are already disappearing the moment they are appearing. This leads to visions of super-positions – simultaneous 1 and 0. With such perception a ‘now’ as a moment can’t even consolidate – there truly is no ground for things to rest on.

Finally (2) My Conceptual Beliefs About Identity! (Prepare to be disappointed)

Keeping in mind what I said about ‘neither self nor not self’, when the intuition of personal identity is so lacking, the question of ‘What is me and what is not me?’ just becomes ‘What does it mean for something to be its own individual entity?’ or, even more simply, ‘What exists?’. Does there exist one thing or more than one thing? And does it even make sense to consider there being ‘things’ (nouns) at all?

(Take this next part as me applying a cosmic lens).

So, is there more than one thing? Engaging my scrupulous, philosophical, inquisitive mind, I can’t conceive of how there being more than one thing would be meaningful. But I don’t even really believe in things at all (if ‘thing’ is taken as a noun), so one thing isn’t quite getting at it either. There is something and it seems to be something so magical that it defies categorical comprehension. But the fact that there is change suggests this is not unitary, yet nor do I wish to say it is legion. Not noun, but verb? A process? But to where and how?

Heidegger often wrote in double negatives; I believe because when you construe something in the negative you bring to mind both the thing and its negative simultaneously. There is a greater potential for the mind to grasp a seeming paradox, but the conceptual mind can never fully do it; it can only approximate. Kierkegaard tried, as he put it: “The self is a relation that relates itself to itself or is the relation’s relating itself to itself in the relation; the self is not the relation but is the relation’s relating itself to itself.” But words can only serve to point to something outside of their grasp.

This is why: 

“The Tao that can be named is not the eternal Tao.”

However, when I stop thinking (disengage the conceptual mind) and simply be, I get an intuitive sense of a super-position. Simultaneously, neither one nor many. Neither now nor not now. Neither existing nor not existing. Neither conscious nor not conscious. And this is apprehended in a way that is not confusing or jarring, but as the most sensible stance.

Still, I have a sceptical bone in my body, and I am always open to being schooled.

Halfway In, Halfway Out The Great Door of Being

Imagine a great conundrum that people have been debating over for centuries. “If a man is stepping through his front door and he has one foot in his house and one foot out of his house and his body is exactly in the middle, is he inside or outside?” People can’t seem to agree. Some say he is clearly inside because he is already under the door frame. Others say he is still outside because he hasn’t fully entered his house yet. People squabble about whether it matters if he is coming or going. The real question is: when he is exactly 50% in and exactly 50% out, what is he? Inside or outside? The reason people can’t come down on a solid answer is that whenever they find someone passing through their front door, the moment they go to make a judgement they miss that 50/50 moment and either witness him too early or too late, at 60/40 or 40/60 in and out. In which case, they either decide he was definitely inside or definitely outside, accordingly. You have been trying to solve this issue too and feel like you have come close. One time you saw a guy in the act at 51/49 in and out. And then another time you saw a man who was 49/51 in and out. But no one is ever precise enough to make their judgement when he is exactly 50/50 in and out. Because true 50% in and 50% out hasn’t been witnessed, people can only speculate: ‘well, if we were to catch a man who was exactly 50/50 in and out of his front door, we would conclude that maybe he was BOTH inside and outside.’

One day, it just so happens you see a man coming home from work. He’s approaching the front door, keys in hand. You’ve been practicing for this moment your whole life. Finally, are you going to be able to solve this great conundrum? He unlocks the door. He opens it. He steps through. And that was it! You witnessed it. You clearly clocked the 50/50 moment. 

“I saw it! I saw it!” you yell. Bystanders hear your cries and come up to you. 

“What did you see?” they ask. 

“I saw the precise moment he was exactly 50% in and 50% out!”

“Well…” they say “what was he, inside or outside then?”

And you respond “No”.

“Huh? Oh, you mean he was both inside and outside?”

“No” you say again.

“I don’t get it.” respond the bystanders. And in fact, you don’t even really get what you mean, because it doesn’t quite make sense to you either and yet it was as clear as day.

“He wasn’t inside or outside, because he simply vanished.”

That Time Daniel Dennett Took 200 Micrograms of LSD (In Another Timeline)

[Epistemic status: fiction]

Andrew Zuckerman messaged me:

Daniel Dennett admits that he has never used psychedelics! What percentage of functionalists are psychedelic-naïve? What percentage of qualia formalists are psychedelic-naïve? In this 2019 quote, he talks about his drug experience and also alludes to meme hazards (though he may not use that term!):

Yes, you put it well. It’s risky to subject your brain and body to unusual substances and stimuli, but any new challenge may prove very enlightening–and possibly therapeutic. There is only a difference in degree between being bumped from depression by a gorgeous summer day and being cured of depression by ingesting a drug of one sort or another. I expect we’ll learn a great deal in the near future about the modulating power of psychedelics. I also expect that we’ll have some scientific martyrs along the way–people who bravely but rashly do things to themselves that disable their minds in very unfortunate ways. I know of a few such cases, and these have made me quite cautious about self-experimentation, since I’m quite content with the mind I have–though I wish I were a better mathematician. Aside from alcohol, caffeine, nicotine and cannabis (which has little effect on me, so I don’t bother with it), I have avoided the mind-changing options. No LSD, no psilocybin or mescaline, though I’ve often been offered them, and none of the “hard” drugs.

 

As a philosopher, I have always accepted the possibility that the Athenians were right: Socrates was quite capable of corrupting the minds of those with whom he had dialogue. I don’t think he did any clear lasting harm, but it is certainly possible for a philosopher to seriously confuse an interlocutor or reader—to the point of mental illness or suicide, or other destructive behavior. Ideas can be just as dangerous as drugs.

 

Dennett Explained by Brendan Fleig-Goldstein and Daniel A. Friedman (2019)


It would be quite fascinating to know what Dan would say about lived psychedelic states. With that in mind, here is an essay prompt originally conceived for GPT-3 to satisfy our curiosity:

And after seeing some surprising empirical results with his heterophenomenological methods when examining the experience of people on psychedelics, Daniel Dennett decided to experience it for himself by taking 200 micrograms of LSD. The first thing he said to himself as he felt the first indications of the come-up was…



Maggie and Anders

Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators and content contributors – see letters I & II, letters III, IV, V, & VI, and letters VII, VIII, & IX) decided to give it a go first. We shall then compare it with what GPT-3 can come up with at a later point.

In a way, what you are about to read can be thought of as Anders & Maggie’s simulation of a hyper-intelligent GPT-3-like system’s simulation of a hypothetical Daniel Dennett on LSD. So many layers, I know. But the results speak for themselves:


Dan says: Correlation is all that is needed. So what states a mind uses for modeling conditions of importance to it, is fundamentally arbitrary. Like with language. Words represent things to humans but there are many languages, and words do not resemble what they represent. They only correlate with what they represent, and the correlation needs to be learned. To do science about the mind, one does not need to attach any special weight to any conscious state of mind over any other. One person’s hope may be another person’s despair. These “qualia” are like internally projected hypo-languages which are different all over the map, if there were one. In fact there cannot be an interpersonal map of what things feel like personally. Only naïve people could hope for a fundamental linguistics of consciousness, as if these states were to constitute a universal language of some ground truth of the mind. Silly. Romantic. It’s all arbitrary. For the record, I never said subjective experiential states do not exist. People misrepresent me on that. What I am saying is that it will never be possible to scientifically map what the state that a word such as, for instance, “green”, translates to feels like by the brain of a given individual. My green could be your red.

Just drop the whole idea of trying to map the state-space of qualia. That is my position. Or at least I know it is, logically. Right now I begin to notice how everything intensifies and becomes somehow more salient. More revealingly “real”. As I reflect on the notion of how “states” correlate, a humorous episode from my undergraduate student life so long ago, is brought to the surface. At Wesleyan it was, where I was taking a course in Art Appreciation. The lecturer was showing a slide of a still life. A bowl of fruit it was, conspicuously over-ripe. Pointing at one of the fruits, saying “Can anyone tell me what state this peach is in?” There was silence for about three seconds, then one student exclaimed: “Georgia”. Everyone laughed joyfully. Except me. I never quite liked puns. Too plebeian. Sense of humor is arbitrary. I believe that episode helped convince me that the mind is not mysterious after all. It is just a form of evolved spaghetti code finding arbitrary solutions to common problems. Much like adaptations of biochemistry in various species of life. The basic building blocks remain fixed as an operative system if you will, but what is constructed with it is arbitrary and only shaped by fitness proxies. Which are, again, nothing but correlations. I realized then that I’d be able to explain consciousness within a materialist paradigm without any mention of spirituality or new realms of physics. All talk of such is nonsense.

I have to say, however, that a remarkable transformation inside my mind is taking place as a result of this drug. I notice the way I now find puns quite funny. Fascinating. I also reflect on the fact that I find it fascinating that I find puns funny. It’s as if… I hesitate to think it even to myself, but there seems to be some extraordinarily strong illusion that “funny” and “fascinating” are in fact those very qualia states which… which cannot possibly be arbitrary. Although the reality of it has got to be that when I feel funniness or fascination, those are brain activity patterns unique to myself, not possible for me to relate to any other creature in the universe experiencing them the same way, or at least not to any non-human species. Not a single one would feel the same, I’m sure. Consider a raven, for example. It’s a bird that behaves socially intricately, makes plans for the next day, can grasp how tools are used, and excels at many other mental tasks even sometimes surpassing a chimpanzee. Yet a raven has a last common ancestor with humans more than three hundred million years ago. The separate genetic happenstances of evolution since then, coupled with the miniaturization pressure due to weight limitations on a flying creature, means that if I were to dissect and anatomically compare the brain of a raven and a human, I’d be at a total loss. Does the bird even have a cerebral cortex?

An out of character thing is happening to me. I begin to feel as if it were in fact likely that a raven does sense conscious states of “funny” and “fascinating”. I still have functioning logic that tells me it must be impossible. Certainly, it’s an intelligent creature. A raven is conscious, probably. Maybe the drug makes me exaggerate even that, but it ought to have a high likelihood of being the case. But the states of experience in a raven’s mind must be totally alien if it were possible to compare them side by side with those of a human, which of course it is not. The bird might as well come from another planet.

The psychedelic drug is having an emotional effect on me. It does not twist my logic, though. This makes for internal conflict. Oppositional suggestions spontaneously present themselves. Could there be at least some qualia properties which are universal? Or is every aspect arbitrary? If the states of the subjective are not epiphenomenal, there would be evolutionary selection pressures shaping them. Logically there should be differences in computational efficiency when the information encoded in qualia feeds back into actions carried out by the body that the mind controls. Or is it epiphenomenal after all? Well, there’s the hard problem. No use pondering that. It’s a drug effect. It’ll wear off. Funny thing though, I feel very, very happy. I’m wondering about valence. It now appeals strongly to take the cognitive leap that at least the positive/negative “axis” of experience may in fact be universal. A modifier of all conscious states, a kind of transform function. Even alien states could then have a “good or bad” quality to them. Not directly related to the cognitive power of intelligences, but used as an efficient guidance for agency by them all, from the humblest mite to the wisest philosopher. Nah. Romanticizing. Anthropomorphizing.

Further into this “trip” now. Enjoying the ride. It’s not going to change my psyche permanently, so why not relax and let go? What if conscious mind states really do have different computational efficiency for various purposes? That would mean there is “ground truth” to be found about consciousness. But how does nature enable the process for “hitting” the efficient states? If that has been convergently perfected by evolution, conscious experience may be more universal than I used to take for granted. Without there being anything supernatural about it. Suppose the possibility space of all conscious states is very large, so that within it there is an ideally suited state for any mental task. No divine providence or intelligent design, just a law of large numbers.

The problem then is only a search algorithmic one, really. Suppose “fright” is a state ideally suited for avoiding danger. At least now, under the influence, fright strikes me as rather better for the purpose than attraction. Come to think of it, Toxoplasma Gondii has the ability to replace fright with attraction in mice with respect to cats. It works the same way in other mammals, too. Are things then not so arbitrarily organized in brains? Well, those are such basic states we’d share them with rodents presumably. Still can’t tell if fright feels like fear in a raven or octopus. But can it feel like attraction? Hmmm, these are just mind wanderings I go through while I wait for this drug to wear off. What’s the harm in it?

Suppose there is a most computationally efficient conscious state for a given mental task. I’d call that state the ground state of conscious intelligence with respect to that task. I’m thinking of it like mental physical chemistry. In that framework, a psychedelic drug would bring a mind to excited states. Those are states the mind has not practiced using for tasks it has learned to do before. The excited states can then be perceived as useless, for they perform worse at tasks one has previously become competent at while sober. Psychedelic states are excited with respect to previous mental tasks, but they would potentially be ground states for new tasks! It’s probably not initially evident exactly what those tasks are, but the great potential to in fact become more mentally able would be apparent to those who use psychedelics. Right now this stands out to me as absolutely crisp, clear, and evident. And the sheer realness of the realization is earth-shaking. Too bad my career could not be improved by any new mental abilities.

Oh Spaghetti Monster, I’m really high now. I feel like the sober me is just so dull. An illusion, of course, but a wonderful one, I’ll have to admit. My mind is taking off from the heavy drudgery of Earth and reaching into the heavens on the wings of Odin’s ravens, eternally open to new insights about life, the universe and everything. Seeking forever the question to the answer. I myself am the answer. Forty-two. I was born in nineteen forty-two. The darkest year in human history. The year when Adolf Hitler looked unstoppable in his drive to destroy all human value in the entire world. Then I came into existence, and things started to improve.

It just struck me that a bird is a good example of embodied intelligence. Sensory input to the brain can produce lasting changes in the neural connectivity and so on, resulting in a saved mental map of that which precipitated the sensory input. Now, a bird has the advantage of flight. It can view things from the ground and from successively higher altitudes and remember the appearance of things on all these different scales. Plus it can move sideways large distances and find systematic changes over scales of horizontal navigation. Entire continents can be included in a bird’s area of potential interest. Continents and seasons. I’m curious if engineers will someday be able to copy the ability of birds into a flying robot. Maximizing computational efficiency. Human-level artificial intelligence I’m quite doubtful of, but maybe bird brains are within reach, though quite a challenge, too.

This GPT-3 system by OpenAI is pretty good at throwing up somewhat plausible suggestions for what someone might say in certain situations. Impressive for a purely lexical information processing system. It can be trained on pretty much any language. I wonder if it could become useful for formalizing those qualia ground states? The system itself is not an intelligence in the agency sense, but it is a good predictor of states. Suppose it can model the way the mind of the bird cycles through all those mental maps the bird brain has in memory, where the zooming in and out on different scales brings out different visual patterns. If aspects of patterns from one zoom level are combined with aspects from another zoom level, the result can be a smart conclusion about where and when to set off, in what direction, and with what aim. Then there can be combinations also with horizontally displaced maps and time-displaced maps. Essentially, to a computer scientist, we are talking about massively parallel processing through cycles of information compression and expansion, with successive approximation combinations of pattern pieces from the various levels in rapid repetition, until something leads to an action which becomes rewarded via utility function maximization.
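The compression-expansion cycling gestured at here can be caricatured in a few lines of Python. This is purely illustrative; the grid size, the block-averaging “zoom out” step, and the 50/50 blend are my own toy assumptions, not anything drawn from GPT-3 or from real cortical maps:

```python
import numpy as np

def compress(grid, factor=2):
    """Coarse-grain a 2D map by block-averaging (the 'zoom out' step)."""
    h, w = grid.shape
    return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def expand(grid, factor=2):
    """Upsample by repetition (the 'zoom in' step)."""
    return np.repeat(np.repeat(grid, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
terrain = rng.random((8, 8))                     # a toy 8x8 'mental map'
coarse = compress(terrain)                       # 4x4 summary of the same terrain
combined = 0.5 * terrain + 0.5 * expand(coarse)  # blend fine detail with coarse context
```

Cycling `compress` and `expand` while blending levels is one crude way to picture “successive approximation combinations of pattern pieces from the various levels.”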


Axioms of Integrated Information Theory (IIT)

Thank goodness I’m keeping all this drugged handwaving to myself and not sharing it in the form of any trip report. I have a reputation for being down to Earth, and I wouldn’t want to spoil it. Flying with ravens, dear me. Privately it is quite fun right now, though. That cycling of mental maps, could it be compatible with the Integrated Information Theory? I don’t think Tononi’s people have gone into how an intelligent system would search qualia state-space and how it would find the task-specific ground states via successive approximations. Rapidly iterated cycling would bring in a dynamic aspect they haven’t gotten to, perhaps. I realize I haven’t read the latest from them. Was always a bit skeptical of the unwieldy mathematics they use. Back of the envelope here… if you replace the clunky “integration” with resonance, maybe there’s a continuum of amplitudes of consciousness intensity? Possibly with a threshold corresponding to IIT’s nonconscious feed-forward causation chains. The only thing straight from physics which would allow this, as far as I can tell from the basic mathematics of it, would be wave interference dynamics. If so, what property might valence correspond to? Indeed, be mappable to? For conscious minds, experiential valence is the closest one gets to updating on a utility function. Waves can interfere constructively and destructively. That gives us frequency-variable amplitude combinations, likely isomorphic with the experienced phenomenology and intensity of conscious states. Such as the enormous “realness” and “fantastic truth” I am now immersed in. Not sure if it’s even “I”. There is ego dissolution. It’s more like a free-floating cosmic revelation. Spectacular must be the mental task for which this state is the ground state!
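The constructive/destructive interference intuition is easy to check numerically. A minimal sketch (the frequencies and the sampling grid are arbitrary choices of mine, not anything from IIT or from a model of the brain):

```python
import numpy as np

def peak_amplitude(freq1, freq2, phase, t_max=1.0, n=10000):
    """Peak amplitude of two superposed unit-amplitude sine waves."""
    t = np.linspace(0, t_max, n)
    wave = np.sin(2 * np.pi * freq1 * t) + np.sin(2 * np.pi * freq2 * t + phase)
    return float(np.abs(wave).max())

# Constructive interference: identical frequency and phase doubles the amplitude.
in_phase = peak_amplitude(10, 10, 0.0)
# Destructive interference: a half-cycle phase shift cancels the waves entirely.
anti_phase = peak_amplitude(10, 10, np.pi)
# Slightly different frequencies give an intermediate, beating amplitude pattern.
beating = peak_amplitude(10, 11, 0.0)
```

Varying frequency, phase, and amplitude already yields the “frequency-variable amplitude combinations” the narrator speculates about.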

Wave pattern variability is clearly not a bottleneck. Plotting graphs of frequencies and amplitudes for even simple interference patterns shows there’s a near-infinite space of distinct potential patterns to pick from. The operative system, that is, the evolution and development of nervous systems, must have been slow to optimize via genetic selection early on in the history of life, but then it could go faster and faster. Let me see, humans acquired a huge redundancy of neocortex of the same type as animals use for navigation in spacetime locations. Hmmm…, that which the birds are so good at. Wonder if the same functionality in ravens also got increased in volume beyond what is needed for navigation? Opening up the possibility of using the brain to also “navigate” in social relational space or tool function space. Literally, these are “spaces” in the brain’s mental models.

Natural selection of genetics cannot have found the ground states for all the multiple tasks a human with our general intelligence is able to take on. Extra brain tissue is one thing it could produce, but the way that tissue gets efficiently used must be trained during life. Since the computational efficiency of the human brain is assessed to be near the theoretical maximum for the raw processing power it has available, inefficient information-encoding states really aren’t very likely to make up any major portion of our mental activity. Now, that’s a really strong constraint on mechanisms of consciousness there. If you don’t believe it was all magically designed by God, you’d have to find a plausible parsimonious mechanism for how the optimization takes place.

If valence is in the system as a basic property, then what can it be, if it’s not amplitude? For things to work optimally, valence should in fact be orthogonal to amplitude. Let me see… What has a natural tendency to persist in evolving systems of wave interference? Playing around with some programs on my computer now… well, it appears it’s consonance which continues and dissonance which dissipates. And noise which neutralizes. Hey, that’s even simple to remember: consonance continues, dissonance dissipates, noise neutralizes. Goodness, I feel like a hippie. Beads and Roman sandals won’t be seen. In Muskogee, Oklahoma, USA. Soon I’ll become convinced love’s got some cosmic ground state function, and that the multiverse is mind-like. Maybe it’s all in the vibes, actually. Spaghetti Monster, how silly that sounds. And at the same time, how true!
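One toy way to play with the narrator’s mnemonic, under the (big) assumption that pure tones with integer frequencies are an adequate stand-in: the superposition of two tones repeats with period 1/gcd(f1, f2), so consonant ratios settle into a short, stable cycle while dissonant ones take far longer to repeat. Nothing here is a claim about brains; it is just the arithmetic behind “consonance continues”:

```python
from math import gcd

def combined_period_ms(f1, f2):
    """Repetition period (ms) of two superposed pure tones with integer Hz frequencies."""
    return 1000 / gcd(f1, f2)

consonant = combined_period_ms(200, 300)   # perfect fifth (3:2): repeats every 10 ms
dissonant = combined_period_ms(200, 283)   # near-tritone: repeats only once per second
```

The consonant pair locks into a pattern a hundred times shorter than the dissonant one, which is at least suggestive of why one would persist and the other smear out.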


Artist: Matthew Smith

I’m now considering the brain to produce self-organizing ground-state qualia selection via activity wave interference, with dissonance gradient descent and consonance gradient ascent, ongoing information compression-expansion cycling, and normalization via buildup of system fatigue. Wonder if it’s just me tripping, or if someone else might seriously be thinking along these lines. If so, what could make a catchy name for their model?

Maybe “Resonant State Selection Theory”? I only wish this could be true, for then it would be possible to unify empty individualism with open individualism in a framework of full empathic transparency. The major ground states for human intelligence could presumably be mapped pretty well with an impressive statistical analyzer like GPT-3. Mapping the universal ground truth of conscious intelligence, what a vision!

But, alas, the acid is beginning to wear off. Back to the good old opaque arbitrariness I’ve built my career on. No turning back now. I think it’s time for a cup of tea, and maybe a cracker to go with that.


On the Evolution of the Phenomenal Self (and Other Communications from QRI Sweden)

By Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators; see letters I & II, and letters III, IV, & V)


“QRI Law of Transhumanism”: The overall motivation of humans to solve social and mental problems will remain much higher than the motivation to solve physics problems. The human performance in solving social and mental problems will remain much lower than the performance in solving physics problems. This continues until social and mental problems become physics problems.

– Anders & Maggie


Letter VI: The Evolution of the Phenomenal Self

Re: Mini-Series on Open Individualism

A follow-up for the more nerdy audience could perhaps be how QRI seeks to resolve the confusion about individualism:

It often turns out that parsimony is a more useful guiding principle in science than naïve realism. This includes naïve realism about what constitutes parsimony. All relevant conditions must be taken into account, and some conditions are unknowns, which blurs the picture. Occam’s razor is powerful but more like a samurai sword: you need great skill to use it well.

Compare the state-space of consciousness with the state-space of chemistry known to humans: there is biochemistry and there is other chemistry. They manifest quite differently. However, parsimony favors that at the fundamental level of organization things reduce to a small set of rules which are the same for all of chemistry. This is now known to indeed be the case but was not always so. Rather, it tended to be assumed that some extra factor, a “life-force”, had to be involved when it comes to biochemistry.

Biochemistry has been evolutionarily selected for performance on a most formidable problem: that of self-replicating a self-replicator. It takes a large number of steps in the process and high precision at each step. Only particular sequences of steps lead to normal cell function, and things are always open to getting corrupted. Take viruses, for instance.

Normal function of a brain is somewhat analogous to normal function of a cell. Evolution has selected for brains which produce the experience of continuity as a unique agent self. This is probably one of the harder tasks that conscious intelligence has solved, corresponding to the advanced parts necessary for reproduction in a cell. It is probably about as unusual in the state-space of consciousness as cellular replication is in the state-space of chemistry. However, the state naïvely feels like it is foundational to everything, which can make you confused when reflecting upon it. It can get even more confusing when you consider the strong possibility that valenced experiences of “good or bad” are much more commonplace in the state-space, perhaps more like transfer of electric charge is commonplace in chemistry.


Self-replicating a self-replicator

You can test this by altering (mental) system properties via meditation or psychedelics. Is “individuality” or “valence” more persistent under perturbation? It’s much harder to get rid of valence, and indeed, the highly altered state of a brain on high doses of 5-MeO-DMT gets rid of the agent self altogether but preserves and even enhances valence, interestingly more often in the positive than the negative direction. It’s like jumping from biochemistry to pyrotechnics.


Self-less 5-MeO-DMT “void”: The state is as different and exotic from normal everyday evolved consciousness as the chemistry of explosive pyrotechnics is to evolved biochemistry.

Naïve realism would hold that the sensations of “one-ness” experienced in certain highly altered states of consciousness feel the way they do because they somehow expand to include other entities into a union with yourself. What is likely to really be going on could be the opposite: there is no “self” as a reality fundament but rather a local complex qualia construct that is easy to interfere with. When it (and other detail) goes away there is less mental model complexity left. A reduction in the information diversity of the experience. Take this far enough and you can get states like “X is love” where X could be anything. These can feel as if they reveal hidden truths, for you obviously had not thought that way before, right? “X is love, wow, what a cosmic connection!”


Letter VII: Fractional Crystallization to Enhance Qualia Diversity

Some more chemistry: is there in qualia state-space something analogous to fractional crystallization? When a magma solidifies relatively rapidly, most of the minor elements stay in solid solution within a few major mineral phases. You get a low-diversity assemblage. When the magma solidifies slowly, it can yield a continuum of various unique phases, all the way down to compounds of elements that were only present at ppb levels in the bulk. Crucially, for this to work well, a powerful viscosity reducer is needed. Water happens to fit the bill perfectly.

Consider the computational performance of the process of solidification of a thousand cubic kilometer plutonic magma with and without an added cubic kilometer of water. The one with the added water functions as a dramatically more efficient sorting algorithm for the chemical element constituents than the dry one. The properties of minor minerals can be quite different from those of the major minerals. The spectrum of mineral physical and chemical properties that the magma solidification produces is greatly broadened by adding that small fraction of water. Which nature does on Earth.

It resembles the difference between narrow and broad intelligence. Now, since the general intelligence of humans requires multiple steps at multiple levels, which takes a lot of time, there might need to be some facilitator that plays the role water does in geology. Water tends to remain in liquid form all the way through crystallization, which compensates for the increase in viscosity that takes place on cooling, allowing fractional crystallization to go to completion in certain pegmatites.

It seems that, in the brain, states become conscious once they “crystallize” into what an IIT-based model might describe as feedback loops. (Some physicalists model this as standing waves?) Each state could be viewed as analogous to a crystal belonging to a mineral family and existing somewhere on a composition spectrum. For each to crystallize as fast and distinctly as possible, there should be just the right amount of a water-activity equivalent. Too much and things stay liquid; too little and no unique new states appear.

Perhaps it is possible to tease out such “mental water” by analyzing brain scan data and comparing them with element fractionation models from geochemistry?

Eliezer Yudkowsky has pointed out that something that is not very high-hanging must have upgraded the human brain so that it became able to make mental models of things no animal would (presumably) even begin to think of. Something where sheer size would not suffice as an explanation. It couldn’t be high-hanging, since the evolutionary search space available between early hominids and Homo sapiens is small in terms of individuals, generations, and genetic variability. Could it be a single factor that does the job as crystallization facilitator, getting the brain primed to produce a huge qualia range? For survival, the bulk of mental states would need to remain largely as they are in other animals, but with an added icing on the cake which turned out to confer a decisive strategic advantage.

It should be low hanging for AI developers, too, but in order to find it they may have to analyze models of qualia state-space and not just models of causal chains in network configurations…


Letter VIII: Tacking on the Winds of Valence

We just thought of something on the subjects of group intelligence and mental issues. Consider a possible QRI framing: valence realism is key to understanding all conscious agency. The psyche takes the experienced valence axis to be equal to “the truth” about the objects of attention which appear experientially together with states of valence. Moment to moment.

Realism coupled with parsimony means it is most likely not possible for a psyche to step outside their experience and override this function. (Leaving out the complication of non-conscious processes here for a moment). But of course learning does exist. Things in psyches can be re-trained within bounds which differ from psyche to psyche. New memories form and valence set-points become correspondingly adjusted.

Naïvely, one might believe it is possible to go against negative valence if you muster enough willpower, or some such. Like a sailboat moving against the wind by using an engine. But what if it’s a system which has to use the wind for everything? With tacking, you can use the wind to move against the wind. It’s more advanced, and only experienced sailors manage to do it optimally. Advanced psyches can couple expectations (strategic predictive modeling) with a high valence associated with the appropriate objects that correlate with strategic goals. If strong enough, such valence gives a net positive sum when coupled with the unpleasant things which need to be “overcome” to reach strategic goals.
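A back-of-the-envelope version of this “net positive sum,” with entirely made-up weights and valence values (the numbers illustrate the bookkeeping, nothing more):

```python
def net_valence(components):
    """Weighted sum of valence components; agency moves only if the net is positive."""
    return sum(weight * valence for weight, valence in components)

# "Tacking": unpleasant immediate experience coupled with a strongly positive,
# vividly modeled expectation of reaching the strategic goal.
plan = [(1.0, -0.6),   # immediate discomfort of the hard work
        (1.0, +0.9)]   # anticipated achievement, kept salient in attention
assert net_valence(plan) > 0  # net positive: the agent moves against the "wind"
```

Drop the anticipated-achievement component and the sum goes negative, which is the naïve “engine” situation: no amount of bare willpower to draw on.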

You can “tack” in mental decision space. The expert psycho-mariner makes mental models of how the combinatorics of fractal valence plays out in their own psyche and in others: intra- and inter-domain valence summation modeling. We’re not quite there yet, but QRI is the group taking a systematic approach to it. We realize that’s what social superintelligences should converge towards. Experiential wellbeing and intelligence can be made to work perfectly in tandem for, in principle, arbitrarily large groups.

It is possible to make a model of negative valence states and render the model to appear in positive valence “lighting”. Sadism is possible, and self-destructive logic is possible. “I deserve to suffer so it is good that I suffer”. The valence is mixed but as long as the weighted sum is positive, agency moves in the destructive direction in these cases. Dysfunction can be complicated.

But on the bright side, a formalism that captures the valence summation well enough should be an excellent basis for ethics and for optimizing intelligences for both agency and wellbeing. This extends to group intelligences. The weight carried by various instantiations of positive and negative valence is then accessible for modeling and it is no longer necessary to consider it a moral imperative to want to destroy everything just to be on the safe side against any risk of negative experience taking place somewhere.


Is it possible to tack on the winds of group valence?

At this early stage we are however faced with the problem of how influential premature conclusions of this type can be, and how much is too much. Certain areas in philosophy and ideology are, to most people, more immediately rewarding than science and engineering, and cheaper, too. But more gets done by a group of scientists who are philosophically inspired than by a group of philosophers who are scientifically inspired.

Could this be in the ballpark-ish?

Stay safe and symmetric!

– Maggie & Anders

Is the Orthogonality Thesis Defensible if We Assume Both Valence Realism and Open Individualism?

Ari Astra asks: Is the Orthogonality Thesis Defensible if We Assume Both “Valence Realism” and Open Individualism?


Ari’s own response: I suppose it’s contingent on whether or not digital zombies are capable of general intelligence, which is an open question. However, phenomenally bound subjective world simulations seem like an uncharacteristic extravagance on the part of evolution if non-sphexish p-zombie general intelligence is possible.

Of course, it may be possible, but just not reachable through Darwinian selection. But the fact that a search process as huge as evolution couldn’t find it and instead developed profoundly sophisticated phenomenally bound subjectivity is (possibly strong) evidence against the proposition that zombie AGI is possible (or likely to be stumbled on by accident).

If we do need phenomenally bound subjectivity for non-sphexish intelligence and minds ultimately care about qualia valence – and confusedly think that they care about other things only when they’re below a certain intelligence (or thoughtfulness) level – then it seems to follow that smarter than human AGIs will converge on valence optimization.

If OI is also true, then smarter than human AGIs will likely converge on this as well – since it’s within the reach of smart humans – and this will plausibly lead to AGIs adopting sentience in general as their target for valence optimization.

Friendliness may be built into the nature of all sufficiently smart and thoughtful general intelligence.

If we’re not drug-naive and we’ve conducted the phenomenological experiment of chemically blowing open the reducing valves that keep “mind at large” out and that filter hominid consciousness into its familiar shape, we know by direct acquaintance that it’s possible to hack our way to more expansive awareness.

We shouldn’t discount the possibility that AGI will do the same simply because the idea is weirdly genre bending. Whatever narrow experience of “self” AGI starts with in the beginning, it may quickly expand out of.


Michael E. Johnson’s response: The orthogonality thesis seems sound from ‘far mode’ but always breaks down in ‘near mode’. One way it breaks down is in implementation: the way you build an AGI system will definitely influence what it tends to ‘want’. Orthogonality is a leaky abstraction in this case.

Another way it breaks down is that the nature and structure of the universe instantiates various Schelling points. As you note, if Valence Realism is true, then there exists a pretty big Schelling point around optimizing that. Any arbitrary AGI would be much more likely to optimize for (and coordinate around) optimizing for positive qualia than, say, paperclips. I think this may be what your question gets at.

Coordination is also a huge question. You may have read this already, but worth pointing to: A new theory of Open Individualism.

To collect some threads: I’d suggest that much of the future will be determined by the coordination capacity and game-theoretic equilibria between (1) different theories of identity, and (2) different metaphysics.

What does ‘metaphysics’ mean here? I use ‘metaphysics’ as shorthand for the ontology people believe is ‘real’: what they believe we should look at when determining moral action.

The cleanest typology for metaphysics I can offer is: some theories focus on computations as the thing that’s ‘real’, the thing that ethically matters – we should pay attention to what the *bits* are doing. Others focus on physical states – we should pay attention to what the *atoms* are doing. I’m on team atoms, as I note here: Against Functionalism.

My suggested takeaway: an open individualist who assumes computationalism is true (team bits) will have a hard time coordinating with an open individualist who assumes physicalism is true (team atoms) — they’re essentially running incompatible versions of OI and will compete for resources. As a first approximation, instead of three theories of personal identity – Closed Individualism, Empty Individualism, Open Individualism – we’d have six. CI-bits, CI-atoms, EI-bits, EI-atoms, OI-bits, OI-atoms. Whether the future is positive will be substantially determined by how widely and deeply we can build positive-sum moral trades between these six frames.

Maybe there’s further structure, if we add the dimension of ‘yes/no’ on Valence Realism. But maybe not– my intuition is that ‘team bits’ trends toward not being valence realists, whereas ‘team atoms’ tends toward it. So we’d still have these core six.

(I believe OI-atoms or EI-atoms is the ‘most true’ theory of personal identity, and that upon reflection and under consistency constraints agents will converge to these theories at the limit, but I expect all six theories to be well-represented by various agents and pseudo-agents in our current and foreseeable technological society.)

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and I in his AI Alignment podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gómez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between and arguments for and against functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr’s Levels of Analysis, we discuss the Symmetry Theory of Valence. We also get into identity and consciousness and the world, the is-ought problem, what this all means for AI alignment and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ talks about how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that yeah, if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism; the idea that phenomenology can be precisely represented mathematically. We borrow the goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists. This refers to how we believe emotional valence (pain and pleasure, the goodness or badness of an experience) is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there are two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor, and this is a pretty deep question: if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real: bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack the skeptic’s position on your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for how to understand knowledge about, in this particular case, how you actually make sense of the world visually. The framework goes as follows: there are three ways in which you can describe an information processing system. First of all, the computational/behavioral level. What that is about is understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. An analogy here would be with an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. The algorithmic level of analysis is about figuring out what the internal representations are, and the possible manipulations of those representations, such that you get the input-output mapping described by the first level. Here you have an interesting relationship where understanding the first level doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be: whenever you want to add one, you just push a bead. Whenever you’re done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s may associate consciousness, if they give any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement over many iterations.
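The abacus algorithm Andrés describes, pushing beads and carrying into the row underneath, can be put in code. This is a minimal sketch under the assumption of a standard base-10, four-row abacus; the class and method names are illustrative, not anything from the conversation.

```python
class Abacus:
    """Algorithmic level: a number is represented as bead counts per row,
    each row worth ten times the row beneath it (base-10 place value)."""

    def __init__(self, rows=4, beads_per_row=10):
        self.rows = [0] * rows  # rows[0] is the ones place
        self.beads_per_row = beads_per_row

    def add_one(self):
        # Push a bead; when a row fills up, push its beads back (reset it)
        # and add a bead in the row underneath (the carry).
        i = 0
        self.rows[i] += 1
        while self.rows[i] == self.beads_per_row:
            self.rows[i] = 0
            i += 1
            self.rows[i] += 1

    def value(self):
        # Computational/behavioral level: the input-output mapping is
        # just ordinary addition, whatever the bead mechanics underneath.
        return sum(count * 10**i for i, count in enumerate(self.rows))

a = Abacus()
for _ in range(37):
    a.add_one()
print(a.value())  # 37
```

The point of the example mirrors the point in the text: `value()` fixes the computational level, while `add_one()` is only one of many algorithms that could realize the same input-output mapping.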

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and generally will be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has bearing on what it is like to be that system.

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled a partition it can be, or whether, if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense, and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which ends up trying to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There’s like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power: the fact that we are conscious matters for evolution. Evolution made us conscious for a reason; it’s actually doing some computational legwork that would maybe be possible to do otherwise, but not as efficiently or as conveniently as it is with consciousness. Then you also have the property of qualia: the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. All of these are in completely different worlds, and in a sense they are, but they have the property that they can be part of a unified experience that can experience color at the same time as experiencing sound. In that sense, those different types of sensations all fall under the category of consciousness, because they can be experienced together.

And finally, you have unity: the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge it and take its unity seriously.

Lucas: What are your guys’s intuition pumps for thinking why consciousness exists as a thing? Why are there qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that there are various contrasts that you can have within experience which can serve a computational role. So, there may be a very deep reason why color qualia, or visual qualia, are used for information processing associated with sight, and why tactile qualia are associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases: people who are synesthetic.

They may open their eyes and experience sounds associated with colors, and people tend to think of those cases as abnormal. I would flip it around and say that we are all synesthetic; it’s just that the synesthesia we have is, in general, very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to geometrically represent a projective space, which is something that naturally comes out of representing the world with a sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans who, whenever they open their eyes, experience sound, and use that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. The intuition behind why we’re conscious is that all of these different contrasts in the structure of the relationships among possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input-output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: whenever you have a particular type of algorithmic information processing, you get this geometric state-space. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.
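The claim that color qualia form a Euclidean three-dimensional state-space can be made concrete: perceptual similarity between colors behaves like geometric distance between points. Here is a minimal sketch under the simplifying assumption of raw RGB coordinates (a perceptually uniform space such as CIELAB would be the more principled choice); the function name is illustrative.

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two colors given as 3-D coordinates.
    Nearby points in the space correspond to similar-looking colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

red, orange, blue = (255, 0, 0), (255, 128, 0), (0, 0, 255)

# Red is geometrically (and perceptually) closer to orange than to blue.
assert color_distance(red, orange) < color_distance(red, blue)
```

The design point: once a qualia variety has a geometric state-space, questions about it ("which of these two is more similar?") become computable, which is exactly the kind of structure the formalist program is after.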

The general approach to tackling them is to point out that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is: how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism, and one of the examples that I brought up was: if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is objectively running. So this is kind of an unanswerable question from the perspective of functionalism, whereas with a physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is: let’s say you’re at a park enjoying an ice cream. Imagine a system I’ve created that has, let’s say, isomorphic algorithms to whatever is going on in your brain. The particular algorithms that your brain is running in that precise moment, within a functionalist paradigm, map onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what the boundary of the system is? Even for some of these particular states that are allegedly very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That then calls into question whether there is any objective, non-arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically: how is it possible that 100 billion spatially distributed neurons, just because they’re skull-bound, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like: what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think the intuition pump for that is direct phenomenological experience: experience seems unified. But experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things. We look at a person and we might have an intuition of what type of person they are, and if we’re not careful, we can confuse our intuitions, we can confuse our feelings, with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There are definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, which is basically what the experience is about (for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on), that is usually a very deceptive part of experience. There’s another aspect of experience that I would say is not deceptive, which is the phenomenal character of experience: how it presents itself. You can be deceived about what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer, based on a number of experiences, that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you notice that you can simultaneously place your attention on two spots of your visual field and make them harmonize. That’s phenomenal character, and I would say that there’s a strong case to be made for not doubting that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gotten into.” It also seems to violate Occam’s razor, or a principle of lightness, where one’s metaphysics or ontology would want to assume the least amount of extra properties or entities in order to try to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe, that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so, it is a viable position today to say that about consciousness: it’s not yet clear whether consciousness has deep structure. But we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it the specific theories and assumptions, which you take to be falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is: it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand the world, how we behave?

The implementational level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation? All it can maybe do is figure out its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis can have bearings on another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window, in the sense that it doesn’t really help you understand the algorithm.

But if the way you’re using water to implement algorithms is by basically creating a system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for which algorithms are finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things, yet epiphenomenalism is true, and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe: that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of an approach to understanding consciousness, in contrast to functionalism. David Pearce, I think, would describe his view as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, “non-materialist” is a rejection of the claim that the stuff of the world is fundamentally unconscious. That’s something that materialism claims: that what the world is made of is not conscious, is raw matter, so to speak.

“Physicalist,” again, in the sense that the laws of physics exhaustively describe behavior, and “idealist” in the sense that what makes up the world is qualia or consciousness. The big picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future, when we potentially have a solution to the problem of consciousness, you think the functionalist project, with its algorithms and explanations of, say, all of the easy problems, all of the mechanisms behind the things that we call consciousness, will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore, check out Consciousness 101, and just flip through it, looking at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalist’s take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence and how it fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. One way to answer that is in an agentive way: basically, you think of agents as entities who spin out possibilities for what actions to take, and then have a way of sorting them by expected utility and carrying them out. A lot of people may associate what we want, or what we like, or what we care about with that level, the agent level, whereas we think the true source of value is more low-level than that: there’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience. That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? Perhaps the best intuition pump here is the extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view. But why would the sound of the Bay Area Rapid Transit, BART, which creates these very intense screeching sounds that are not even within the vocal range of humans, just really bizarre, never encountered in our evolutionary past, nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns. It’s not just goals and actions and utility functions; the actual pattern of your experience may determine valence. The same goes for the SubPac, a technology that basically renders sounds between 10 and 100 hertz. Some of them feel really good, some of them feel pretty unnerving, some of them are anxiety-producing, and it’s like, why would that be the case, especially when you’re getting types of input that have nothing to do with our evolutionary past?

It seems that there are ways of triggering high and low valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness, like meditation or psychedelics, that seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss or pain. And again, they don’t seem to have much semantic content per se; or rather, the semantic content is not the core reason why they feel the way they do. It has more to do with the particular structure they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want. But all of these have counterexamples; all of these have some point you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some patterns just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or sort of intrinsically encode, your emotional valence, how pleasant or unpleasant the experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is that there’s this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they’re just good or bad. So with this valence realism view, this goodness or badness, whose nature is sort of self-intimatingly disclosed, has been in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life. Embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences, which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific, hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in an experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists generally, would say there’s just another reinforcement learning algorithm somewhere that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by basically realizing that liking, wanting and learning are possible to dissociate; in particular, you can have reinforcement without an associated positive valence, and you can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something we believe matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive; we wouldn’t necessarily see it. And also, it would be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample: somebody may claim that what they truly want is to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there are valence gradients happening there. One way to show this: on graduation day, you give the person an opioid antagonist. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, that they thought all along was tied to the intentional content, the fact of graduating, but in fact it was the hedonic gloss that they were after. And that’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly, then the subproblems become much easier than the overall problem, and if you collect all the solutions to the subproblems, then in aggregate you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math and the interpretation. The first question is: what metaphysics do you even start with? With what ontology do you even approach the problem? We’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there’s the question of, okay, you have your core ontology, in this case physics, and then what counts, what actively contributes to consciousness? Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head, it can directly contribute to my conscious experience. But even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine; there seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem, and these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness in itself through this lens; it has a certain style of answer, a style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

Say we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness. Then we can figure out some principled way to draw a boundary around, okay, this is conscious experience A and this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary. Then there’s this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the question of: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self-reports as being the gold standard. I think evolution is tricky sometimes and can lead to inaccurate self-reports, but at the same time they’re probably pretty good, and they’re the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that, while we may be wrong in an absolute sense, we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven when the reality is a five; that may be a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There’s this researcher William Sethares who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it’s going to be pretty low on average, and if you take the average dissonance of a minor key it’s going to be higher. So in a sense, what distinguishes a minor and a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.

That’s a very ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition, and how do you even go about it? But deep down, the reason why a particular passage sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very much as in music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.
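The "add up the pairwise dissonance between every harmonic" computation described here can be sketched in a few lines. The sketch below is a toy illustration, not Sethares' exact model: the dissonance curve is a simplified Plomp-Levelt-style function, and the specific constants, harmonic rolloff, and note frequencies are illustrative assumptions.

```python
import itertools
import math

def dissonance(f1, f2, a1=1.0, a2=1.0):
    """Simplified Plomp-Levelt-style sensory dissonance between two sine partials."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19)  # rough critical-bandwidth scaling
    x = s * (f_hi - f_lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def note_harmonics(fundamental, n=6):
    """First n harmonics of a note, with amplitudes rolling off as 1/k."""
    return [(fundamental * k, 1.0 / k) for k in range(1, n + 1)]

def pair_dissonance(note_a, note_b):
    """Add up dissonance over every pair of harmonics of the two notes."""
    total = 0.0
    for fa, aa in note_harmonics(note_a):
        for fb, ab in note_harmonics(note_b):
            total += dissonance(fa, fb, aa, ab)
    return total

def average_dissonance(fundamentals):
    """Mean pairwise dissonance across all note pairs in a chord or scale."""
    pairs = list(itertools.combinations(fundamentals, 2))
    return sum(pair_dissonance(a, b) for a, b in pairs) / len(pairs)

# Illustrative comparison: a C major triad vs. a semitone cluster (Hz).
major_triad = [261.63, 329.63, 392.00]  # C4, E4, G4
cluster     = [261.63, 277.18, 293.66]  # C4, C#4, D4
print(average_dissonance(major_triad) < average_dissonance(cluster))
```

Even this toy version reproduces the qualitative claim: chords whose harmonics frequently fall within a critical bandwidth of each other (the cluster) score higher average dissonance than chords whose harmonics largely align or stay well separated (the triad).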

These are electromagnetic waves, and it’s not exactly static, but it’s not exactly a standing wave either; it gets really close to one. So basically what this is saying is that there’s this excitation-inhibition wave function, and that happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which that wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is a very simple one where the two hemispheres are alternatingly more excited versus inhibited. That is a low-frequency harmonic because it is a spatially very large wave, an alternating pattern of excitation. Much higher-frequency harmonics are much more detailed and obviously hard to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited form very thin wave fronts.

It’s not a mechanical wave as such; it’s an electromagnetic wave. So it’s actually the electric potential in each of these regions of the brain that fluctuates, and within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
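The "weighted sum of harmonics" picture can be made concrete with a toy model. Below, a 1-D line of regions stands in for the cortical surface, and simple sine modes stand in for connectome-specific harmonics (which in Atasoy's actual model are eigenmodes of the connectome's graph Laplacian, not sines). A state is a weighted sum of modes, and projecting the state back onto a mode recovers how much energy sits at that "frequency":

```python
import math

N_REGIONS = 64  # toy stand-in for spatial points on the cortex

def harmonic(k):
    """k-th spatial harmonic on a 1-D 'cortex': a sine mode with k half-waves."""
    return [math.sin(math.pi * k * (i + 0.5) / N_REGIONS) for i in range(N_REGIONS)]

def brain_state(weights):
    """A state is a weighted sum of harmonics: weights[k] scales the k-th mode."""
    state = [0.0] * N_REGIONS
    for k, w in weights.items():
        mode = harmonic(k)
        for i in range(N_REGIONS):
            state[i] += w * mode[i]
    return state

def mode_energy(state, k):
    """Project a state back onto harmonic k to recover that mode's weight
    (the 'where is the energy across frequencies?' step)."""
    mode = harmonic(k)
    return sum(s * m for s, m in zip(state, mode)) / sum(m * m for m in mode)

# A state dominated by a low-frequency mode, with a little high-frequency detail.
state = brain_state({1: 2.0, 17: 0.3})
print(round(mode_energy(state, 1), 3), round(mode_energy(state, 17), 3))  # → 2.0 0.3
```

Because the sine modes are mutually orthogonal, the decomposition is exact: any state built this way can be fully described by its per-harmonic weights, which is the sense in which "what that weighted sum looks like" characterizes the state.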

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is, like, a fundamental valence feature of the world, and so all brains that come out of evolution should probably enjoy resonance?

Mike: It’s less about the stimulus, it’s less about the exact signal and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or in a precisely technical term, the consonance that counts is the stuff that happens inside our brains. Empirically speaking most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going in our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combine it with our Symmetry Theory of Valence and say that this is a way of basically getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? With the link to the Symmetry Theory of Valence, that should be a very good proxy for how pleasant it is to be that brain.

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There’s probably many ways of generating states of consciousness that are in a sense completely unnatural that are not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness would cash out in differences in brain harmonics because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about this connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump in here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on these: the first is what you call the real problem of consciousness, and the second one is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of whether it is something that can be formalized or is intrinsically fuzzy. I’m calling this the real problem of consciousness, and a lot depends on the answer to it. There are so many different ways to approach consciousness and hundreds, perhaps thousands, of different carvings of the problem: we have panpsychism, we have dualism, we have non-materialist physicalism and so on. I think the core distinction is that all of these theories sort themselves into two buckets, and that’s: is consciousness real enough to formalize exactly, or not? This frame is perhaps the most useful one for evaluating theories of consciousness.

Lucas: And then there’s the meta-problem of consciousness, which is quite funny. It’s basically: why have we been talking about consciousness for the past hour, and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in this field. The meta-problem of consciousness doesn’t necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move people make here is to say that all of these crazy things we think and say about consciousness are just an information processing system modeling its own attentional dynamics. That’s one illusionist frame. But even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, such that you need to have consciousness and so on, but still lacking introspective access. You could have these complicated conscious information processing systems that don’t necessarily self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting basically when you are at existential risk or when there are reproductive opportunities that you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience, added together in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something that we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, and they assume everyone else is conscious, and they think that the only place for value to reside is within consciousness, and that a world without consciousness is actually a world without any meaning or value. Even if we think that philosophical zombies, people who are functionally identical to us but with no qualia or phenomenological or experiential states, are conceivable, it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, or valence, have some sort of special ethical or experiential status, unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence, that we should be wary of building a Disneyland with no children: some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it, basically. I would say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say that there aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as to your question about pain and pleasure, I’ll pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re able to really probe the quality of your experience. In many states you believe that the reason why you like something is its intentional content. Again, take the case of graduating, or getting a promotion, or one of those things that a lot of people associate with feeling great. If you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opioid antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, when many different points of view agree on what aspect of the experience is the one that brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought to be. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and frameworks in ethics make much more, or less, sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality, about what exists, what’s real, and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit ‘should’ wrapped up in questions about valence, but that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame on valence. Whether or not this also discloses, as David Pearce says, the utility function of the universe, that is another question and can be decomposed.

Andrés: One framing here, too, is that we do suspect valence is going to be the thing that matters to any mind, if you probe it in the right way, in order to achieve reflective equilibrium. A good example is a talk a neuroscientist was giving at some point: there was something off, and everybody seemed to be a little bit anxious or irritated, and nobody knew why. Then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There was this very subtle hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn’t realize that was the source of the irritation. When it got fixed, everybody was like, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. So somebody in the year 2050 might come to one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me,” but if you put them through the scanner, we might identify that their 17th and 19th harmonics are in a state of dissonance. We cancel the 17th to make it cleaner, and then the person will say, all of a sudden, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us about why we prefer this, or why we think that is worse, will all of a sudden become crystal clear from the point of view of valence gradients objectively measured.

Mike: One of my favorite phrases in this context is “what you can measure you can manage,” and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door to, honestly, a lot of amazing things, making the human condition just intrinsically better. Also maybe a lot of worrying things: being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems, and they’re getting pretty strong, and they’re going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride, for now. So I’d like to discuss a little bit here the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all, especially, to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even if you have the best technical solution in the world, a sufficiently good technical solution doesn’t mean it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem: what are we even trying to do here? What is the optimal relationship between AI and humanity? And a couple of specific details here: first of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing x-risk.

Lucas: What kind of nihilism are you talking about here, like nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide. Whereas how I think of it, the real philosophical question is how to make love last, bringing value to existence, and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People talk about how global coordination in building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So moving forward with AI alignment, as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what’s up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb for how to do that. One interesting thought is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but ones which optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment from the present day until superintelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s sort of understanding that, yeah, human preferences are insane; they’re just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There’s this frame in AI safety called the complexity of value thesis; I believe Eliezer came up with it in a post on LessWrong. It’s this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn’t think of as very valuable: maybe we leave everything else the same but take away freedom, and most of the value is gone. This paints a pretty sobering picture of how difficult AI alignment will be.

I think this is perhaps arguably the source of a lot of worry in the community: not only do we need to make machines that won’t just immediately kill us, but ones that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly, if we move at all, we may enter a totally different trajectory, one that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing I’m playing around with here is: instead of the complexity of value thesis, the unity of value thesis. It could be that many of the things we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there’s some sort of elegant compression that can be made, and things aren’t actually so stark. We’re not this point in a super high-dimensional space where, if we leave the point, everything of value is trashed forever; maybe there’s some sort of convergent process that we can follow, that we can essentialize. We could make a list of 100 things that humanity values, and maybe what they all have in common is positive valence, and positive valence can sort of be reverse-engineered. To some people this feels like a very scary, dystopic scenario – don’t knock it until you’ve tried it – but at the same time there’s a lot of complexity here.
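The contrast between the two theses can be caricatured numerically. Under a "complexity of value" model, value is a narrow peak around one point in a high-dimensional space, so almost any perturbation destroys it; under a "unity of value" model, value depends only on one projected coordinate (a stand-in for valence), so most perturbations are value-preserving. Everything below, the dimension count, the target point, the scoring functions, is an illustrative assumption, not anyone's actual model:

```python
import math
import random

random.seed(0)
DIM = 1000
target = [random.gauss(0, 1) for _ in range(DIM)]  # stand-in for 'human values'
valence_axis = [1.0 / math.sqrt(DIM)] * DIM        # unit vector: the 'valence' direction

def complexity_value(x, width=0.5):
    """Complexity-of-value caricature: a narrow Gaussian bump around the
    target point, so any sizable move in any direction kills the value."""
    d2 = sum((xi - ti) ** 2 for xi, ti in zip(x, target))
    return math.exp(-d2 / (2 * width ** 2))

def unity_value(x):
    """Unity-of-value caricature: value depends only on the projection
    onto a single axis (the stand-in for valence)."""
    v = sum(xi * ai for xi, ai in zip(x, valence_axis))
    target_v = sum(ti * ai for ti, ai in zip(target, valence_axis))
    return math.exp(-((v - target_v) ** 2) / 2)

# Perturb every coordinate of the target slightly, in a random direction.
perturbed = [ti + random.gauss(0, 0.1) for ti in target]
print(complexity_value(perturbed), unity_value(perturbed))
```

The small per-coordinate noise accumulates into a large total distance in 1000 dimensions, so the complexity-of-value score collapses to nearly zero, while the projection onto the single valence axis barely moves and the unity-of-value score stays near 1. That is the fragility asymmetry the two framings disagree about.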

One core frame that the idea of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different from what the complexity of value thesis puts forward, and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of the preferences and values that human beings have, and the valence realist view, which says that what’s ultimately good are certain experiential or hedonic states. I’m curious, if this valence view is true, whether it’s all just going to turn into hedonium in the end.

Mike: I’m personally a fan of continuity. I think that if we do things right we’ll have plenty of time to get things right, and also if we do things wrong then we’ll have plenty of time for things to be wrong. So I’m personally not a fan of big unilateral moves. It’s just getting back to this question of whether understanding what is can help us: clearly, yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it’s a seductive frame, I would say. But whole brain emulations wouldn’t necessarily have the same qualia, based on hardware considerations, as the original humans, and there could be some weird lock-in effects: if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It’s a pretty good taxonomy. Basically there’s open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we’re all one consciousness. Another frame is that our true identity is the light of consciousness, so to speak, so it doesn’t matter in what form it manifests; it’s always the same fundamental ground of being. Then you have the common sense view, which is called closed individualism: you start existing when you’re born, you stop existing when you die. You’re just this segment. Some religions actually extend that into the future or past with reincarnation or maybe with heaven.

It’s the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have this view called empty individualism, which is that you’re just a moment of experience. That’s fairly common among physicists and a lot of people who’ve tried to formalize consciousness; often they converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing your own expected utility as an agent, all of those seem to implicitly be based on closed individualism, and they’re not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn’t actually carve nature at its joints, as a Buddhist might say, and the feeling of continuity of being a separate unique entity is an illusory construction of your phenomenology, that casts how to approach rationality itself, and even self-interest, in a completely different light, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because insofar as they are conscious they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into these very tricky situations like, well, what if there is mind melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common sense view of identity, but they’re not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there’s probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we’re actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it’s evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to think of yourself as that which you can affect the most directly in a causal way, if you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have. And if you think of a society of open individualists, where everybody is altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that one view would have a tremendous evolutionary advantage in that context. So I’m not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it: how to make it evolutionarily stable and also how to make it ethical. It’s an open question, but I do think it’s important to think about, and if you take consciousness very seriously, especially within physicalism, that usually will cast huge doubts on the common sense view of identity.

It doesn’t seem like a very plausible view if you actually tried to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we’re not all empty individualists or open individualists by default. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self, since they’re not the same as you? So that leads to a problem of defection. And open individualism, where everything is the same being so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn’t allow altruist punishment or any way to stop the free riding. There’s interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one’s theory of identity. People are opening themselves up to getting hacked in different ways, and different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that’s really good and sometimes really bad. I would make the prediction that, not necessarily open individualism in its full-fledged form, but a weaker sense of identity than closed individualism, is likely going to be highly adaptive in the future as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells, not trying to disturb the local attractor too much. That itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the people making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity I think is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, on identity, I’d like to move beyond all distinctions of sameness or difference. To say, oh, we’re all one consciousness, to me seems like saying we’re all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world that’s just sort of a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

The same way in which it would be nonsense to say, “Oh, I am these specific atoms, I am just the forces of nature that are bounded within my skin and body.” That would be nonsense. In the same sense, in what we were discussing with consciousness, there was the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional use, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of the valence theory, if qualia are actually something good or bad, then, as David Pearce says, it’s really just an epistemological problem that you don’t have access to other brain states in order to see the self-intimating nature of what it’s like to be that thing in that moment.

There’s a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way. But in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it’s good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn’t matter where the bliss is. Bliss is bliss, and there’s no such thing as your bliss or anyone else’s bliss. Bliss is like its own independent feature or property, and you don’t really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that’s playing out.

The universe is just peopling all of us at the same time, and when you get this view and you see yourself as just sort of a super thin slice of the evolution of consciousness and life, for me it’s like, why do I really need to propagate my information into the future? I really don’t think there’s anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see that all as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don’t have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that’s not necessarily super easy if you’re suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There’s an article I wrote, I just called it Consciousness Versus Replicators. That kind of gets to the heart of this issue. It sounds a little bit like good versus evil, but it really isn’t. The true enemy here is replication for replication’s sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and benefit of consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark’s general frame of us living in this mathematical universe. One reframe of what we were just talking about in these terms is: there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and optimize our light cone for those sorts of patterns. This may have some counterintuitive implications; maybe closed individualism is actually a very adaptive thing that in the long term builds robust societies. It could be that that’s not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. It’s this idea that we tend to think of consciousness on the human scale. Is the human condition good or bad? Is the balance of human experience on the good end, the heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated but is falsifiable, and this gets into Bostrom’s simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, first of all, the human stuff might just be a rounding error.

Most of the value, in this sense the positive and negative valence, is found elsewhere, not in humanity. And second of all, I have a list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding in the cosmological sense. I’m very suspicious that the big bang starts with a very symmetrical state; I’ll just leave it there. In a utilitarian sense, if we want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not human; cosmological-scale events or objects would be very interesting to point it at. This would give a much clearer answer than human intuition as to whether we live somewhere closer to heaven or hell.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, I think wonderful questions, and it’s very rare for an interviewer to have unconventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org and we’re working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We’re building out the organization, and if you want to read our stuff a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés’s at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey

Utilitronium Shockwaves vs. Gradients of Bliss

Excerpt from On utilitronium shockwaves versus gradients of bliss by David Pearce



Utilitronium Shockwave: Turn your local Galaxy Super-Cluster into a Full-Spectrum Orgasm in 9 easy civilizational steps.

Why is the idea of life animated by gradients of intelligent bliss attractive, at least to some of us, whereas the prospect of utilitronium leaves almost everyone cold? One reason is the anticipated loss of self: if one’s matter and energy were converted into utilitronium, then intuitively the intense undifferentiated bliss wouldn’t be me. By contrast, even a radical recalibration of one’s hedonic set-point intuitively preserves the greater part of one’s values, memories and existing preference architecture: in short, personal identity. Whether such preservation of self would really obtain if life were animated by gradients of bliss, and whether such notional continuity is ethically significant, and whether the notion of an enduring metaphysical ego is even intellectually coherent, is another matter. Regardless of our answers to such questions, there is a tension between our divergent response to the prospect of cosmos-wide utilitronium and intelligent bliss. People rarely complain that e.g. orgasmic sexual ecstasy lasts too long, and that regrettably they lose their sense of personal identity while orgasm lasts. On the contrary: behavioural evidence strongly suggests that most men in particular reckon sexual bliss is too short-lived and infrequent. Indeed if such sexual bliss were available indefinitely, and if it were characterised by an intensity orders of magnitude greater than the best human orgasms, then would anyone – should anyone – wish such ecstasy to stop? Subjectively, utilitronium presumably feels more sublime than sexual bliss, or even whole-body orgasm. Granted the feasibility of such heavenly bliss, is viewing the history of life on Earth to date as mere stepping-stones to cosmic nirvana really so outrageous?


Is attachment to your sense of self keeping you from embracing hedonium? Stop ‘Selfing’ with these 3 Buddhist-approved Techniques!

For the foreseeable future, however, even strict classical utilitarians must work for information-sensitive gradients of intelligent bliss rather than raw undifferentiated pleasure. Classical hedonistic utilitarianism was originally formulated as an ethic for legislators, not biologists or computer scientists. Conceived in this light, the felicific calculus has been treated as infeasible. Yet a disguised implication of a classical utilitarian ethic in an era of mature biotechnology may be that we should be seeking to convert the world into utilitronium, generally assumed to be relatively homogenous matter and energy optimised for raw bliss. The “shockwave” in utilitronium shockwave alludes to our hypothetical obligation to launch von Neumann probes propagating this hyper-valuable state of matter and energy at, or nearly at, the velocity of light across our Galaxy, then our Local Cluster, and then our Local Supercluster. And beyond? Well, politics is the art of the possible. The accelerating expansion of the universe would seem to make further utilitronium propagation infeasible even with utopian technologies. Such pessimism assumes our existing understanding of theoretical physics is correct; but theoretical cosmology is currently in a state of flux.


Utilitronium Shockwave? (cf. Hedonium)

Naively, the theoretical feasibility of a utilitronium shockwave is too remote to worry about. This question might seem a mere philosophical curiosity. But not so. Complications of uncertain outcome aside, any rate of time discounting indistinguishable from zero is ethically unacceptable for the ethical utilitarian. So on the face of it, the technical feasibility of a utilitronium shockwave makes working for its adoption ethically mandatory even if the prospect is centuries or millennia distant.

Existential Risk? Utilitarian ethics and speculative cosmology might seem far removed. But perhaps the only credible candidate for naturalising value has seemingly apocalyptic implications that have never (to my knowledge) been explored in the scholarly literature. And can we seriously hope to be effective altruists in the absence of a serviceable model of Reality?


All-New “Life”! – Now animated by gradients of bliss. Pain-free!

Should existential risk reduction be the primary goal of: a) negative utilitarians? b) classical hedonistic utilitarians? c) preference utilitarians? All, or none, of the above? The answer is far from obvious. For example, one might naively suppose that a negative utilitarian would welcome human extinction. But only (trans)humans – or our potential superintelligent successors – are technically capable of phasing out the cruelties of the rest of the living world on Earth. And only (trans)humans – or rather our potential superintelligent successors – are technically capable of assuming stewardship of our entire Hubble volume. Conceptions of the meaning of the term “existential risk” differ. Compare David Benatar’s “Better Never To Have Been” with Nick Bostrom’s “Astronomical Waste“. Here at least, we will use the life-affirming sense of the term. Does negative utilitarianism or classical utilitarianism represent the greater threat to intelligent life in the cosmos? Arguably, we have our long-term existential risk-assessment back-to-front. A negative utilitarian believes that once intelligent agents have phased out the biology of suffering, all our ethical duties have been discharged. But the classical utilitarian seems ethically committed to converting all accessible matter and energy – not least human and nonhuman animals – into relatively homogeneous matter optimised for maximum bliss: “utilitronium”.

Ramifications? Severe curtailment of personal liberties in the name of Existential Risk Reduction is certainly conceivable. Assume, for example, that the technical knowledge of how to create and deploy readily transmissible, 100% lethal, delayed-action weaponised pathogens leaks into the public domain. Only the most Orwellian measures – a perpetual global totalitarianism – could hope to prevent their use, whether by a misanthrope or an idealist. Such measures would most likely fail. By contrast, constitutively happy people would be incapable of envisaging the development and use of such a doomsday agent. The biology of suffering in intelligent agents is a deep underlying source of existential risk – and one that can potentially be overcome.


Gradients of Bliss world in a Hedonium Universe? – “Central Realm of the Densely-Packed.”

A theoretically inelegant but pragmatically effective compromise solution might be to initiate a utilitronium shockwave that propagates outside the biosphere – or realm of posthuman civilisation. The world within our cosmological horizon could then be tiled with utilitronium, with the exception of a negligible island (or archipelago) of minds animated “merely” by gradients of intelligent bliss. One advantage of this hybrid option is that most refuseniks would (presumably) be indifferent to the fate of inert matter and energy outside their lifeworld. Ask someone today whether they’d mind if some anonymous rock on the far side of the moon were converted into utilitronium and they’d most likely shrug.


Shrugging at the prospect of hedonium rocks on the moon.

In future, gradients of intelligent bliss orders of magnitude richer than today’s peak experiences could well be a design feature of the post-human mind. However, I don’t think intracranial self-stimulation is consistent with intelligence or critical insight. This is because it is uniformly rewarding. Intelligence depends on informational sensitivity to positive and negative stimuli – even if “negative” posthuman hedonic dips are richer and higher than the human hedonic ceiling.

In contrast to life animated by gradients of bliss, the prospect of utilitronium cannot motivate. Or rather the prospect can motivate only a rare kind of hyper-systematiser drawn to its simplicity and elegance. The dips of intelligent bliss need not be deep […] Everyday hedonic tone could be orders of magnitude richer than anything physiologically feasible now. But will such well-being be orgasmic? Orgasmic bliss lacks – in the jargon of academic philosophy – an “intentional object”. So presumably there will be selection pressure against any predisposition to enjoy 24/7 orgasms. By contrast, information-sensitive gradients of intelligent bliss can be adaptive – and hence sustainable indefinitely, allowing universe maintenance: responsible stewardship of the Hubble volume.


Can Life and Hedonium get Married? Express your eternal love with Hedonium Jewelry! Made of 99.99% Pure Bliss! (Guaranteed by Hilbert Space Hamiltonian Assay – Lab Tested Hedonium!)

At any rate, posthumans may regard even human “peak experiences” as indescribably dull by comparison.


Image credit for the Buddhist monk picture “Is”: Alex William Hoffman.

Thoughts on the ‘Is-Ought Problem’ from a Qualia Realist Point of View

tl;dr If we construct a theory of meaning grounded in qualia and felt-sense, it is possible to congruently arrive at “should” statements on the basis of reason and “is” claims. Meaning grounded in qualia allows us to import the pleasure-pain axis and its phenomenal character to the same plane of discussion as factual and structural observations.

Introduction

The Is-Ought problem (also called “Hume’s guillotine”) is a classical philosophical conundrum. On the one hand people feel that our ethical obligations (at least the uncontroversial ones like “do not torture anyone for no reason”) are facts about reality in some important sense, but on the other hand, rigorously deriving such “moral facts” from facts about the universe appears to be a category error. Is there any physical fact that truly compels us to act in one way or another?

A friend recently asked about my thoughts on this question and I took the time to express them to the best of my knowledge.

Takeaways

I provide seven points of discussion that together can be used to make the case that “ought” judgements often, though not always, are on the same ontological footing as “is” claims. Namely, that they are references to the structure and quality of experience, whose ultimate nature is self-intimating (i.e. it reveals itself) and hence inaccessible to those who lack the physiological apparatus to instantiate it. In turn, we could say that within communities of beings who share the same self-intimating qualities of experience, the is/ought divide may not be completely unbridgeable.


Summaries of Question and Response

Summary of the question:

How does a “should” emerge at all? How can reason and/or principles and/or logic compel us to follow some moral code?

Summary of the response:

  1. If “ought” statements are to be part of our worldview, then they must refer to decisions about experiences: what kinds of experiences are better/worse, what experiences should or should not exist, etc.
  2. A shared sense of personal identity (e.g. Open Individualism – which posits that “we are all one consciousness”) allows us to make parallels between the quality of our experience and the experience of others. Hence if one grounds “oughts” on the self-intimating quality of one’s suffering, then we can also extrapolate that such “oughts” must exist in the experience of other sentient beings and that they are no less real “over there” simply because a different brain is generating them (general relativity shows that every “here and now” is equally real).
  3. Reduction cuts both ways: if the “fire in the equations of physics” can feel a certain way (e.g. bliss/pain) then objective causal descriptions of reality (about e.g. brain states) are implicitly referring to precisely that which has an “ought” quality. Thus physics may be inextricably connected with moral “oughts”.
  4. If one loses sight of the fact that one’s experience is the ultimate referent for meaning, it is possible to end up in nihilistic accounts of meaning (e.g. such as Quine’s Indeterminacy of translation and Dennett’s inclusion of qualia within that framework). But if one grounds meaning in qualia, then suddenly both causality and value are on the same ontological footing (cf. Valence Realism).
  5. To see clearly the nature of value it is best to examine it at its extremes (such as MDMA bliss vs. the pain of kidney stones). Having such experiences illuminates the “ought” aspect of consciousness, in contrast to the typical quasi-anhedonic “normal everyday states of consciousness” that most people (and philosophers!) tend to reason from. It would be interesting to see philosophers discuss e.g. the Is-Ought problem while on MDMA.
  6. Claims by long-term meditators that “pleasure and pain, value and disvalue, good and bad, etc.” are an illusion, based on the experience of “dissolving value” in meditative states, are no more valid than claims by someone doped on morphine that pain is an illusion. In brief: such claims are made in a state of consciousness that has lost touch with the actual quality of experience that gives (dis)value to consciousness.
  7. Admittedly the idea that one state of consciousness can even refer to (let alone make value judgements about) other states of consciousness is very problematic. In what sense does “reference” even make sense? Every moment of experience only has access to its own content. We posit that this problem is not ultimately unsolvable, and that human concepts are currently mere prototypes of a much better future set of varieties of consciousness optimized for truth-finding. As a thought experiment to illustrate this possible future, consider a full-spectrum superintelligence capable of instantiating arbitrary modes of experience and impartially comparing them side by side in order to build a total order of consciousness.

Full Question and Response

Question:

I realized I don’t share some fundamental assumptions that seemed common amongst the people here [referring to the Qualia Research Institute and friends].

The most basic way I know how to phrase it, is the notion that there’s some appeal to reason and/or principles and/or logic that compels us to follow some type of moral code.

A (possibly straw-man) instance is the notion I associate with effective altruism, namely, that one should choose a career based on its calculable contribution to human welfare. The assumption is that human welfare is what we “should” care about. Why should we? What’s compelling about trying to reconfigure ourselves from whatever we value at the moment to replacing that thing with human welfare (or anything else)? What makes us think we can even truly succeed in reconfiguring ourselves like this? The obvious pitfall seems to be we create some image of “goodness” that we try to live up to without ever being honest with ourselves and owning our authentic desires. IMO this issue is rampant in mainstream Christianity.

More generally, I don’t understand how a “should” emerges within moral philosophy at all. I understand how starting with a want, say happiness, and noting a general tendency, such as I become happy when I help others, that one could deduce that helping others often is likely to result in a happy life. I might even say “I should help others” to myself, knowing it’s a strategy to get what I want. That’s not the type of “should” I’m talking about. What I’m talking about is “should” at the most basic level of one’s value structure. I don’t understand how any amount of reasoning could tell us what our most basic values and desires “should” be.

I would like to read something rigorous on this issue. I appreciate any references, as well as any elucidating replies.

Response:

This is a very important topic. I think it is great that you raise this question, as it stands at the core of many debates and arguments about ethics and morality. I think that one can indeed make a really strong case for the view that “ought” is simply never logically implied by any accurate and objective description of the world (the famous is/ought Humean guillotine). I understand that an objective assessment of all that is will usually be cast as a network of causal and structural relationships. By starting out with a network of causal and structural relationships and using logical inferences to arrive at further high-level facts, one is ultimately bound to arrive at conclusions that themselves are just structural and causal relationships. So where does the “ought” fit in here? Is it really just a manner of speaking? A linguistic spandrel that emerges from evolutionary history? It could really seem like it, and I admit that I do not have a silver bullet argument against this view.

However, I do think that eventually we will arrive at a post-Galilean understanding of consciousness, and that this understanding will itself allow us to point out exactly where- if at all- ethical imperatives are located and how they emerge. For now all I have is a series of observations that I hope can help you develop an intuition for how we are thinking about it, and why our take is original and novel (and not simply a rehashing of previous arguments or appeals to nature/intuition/guilt).

So without further ado I would like to lay out the following points on the table:

  1. I am of the mind that if any kind of “ought” is present in reality it will involve decision-making about the quality of consciousness of subjects of experience. I do not think that it makes sense to talk about an ethical imperative that has anything to do with non-experiential properties of the universe precisely because there would be no one affected by it. If there is an argument for caring about things that have no impact on any state of consciousness, I have yet to encounter it. So I will assume that the question refers to whether certain states of consciousness ought to or ought not to exist (and how to make trade-offs between them).
  2. I also think that personal identity is key for this discussion, but why this is the case will make sense in a moment. The short answer is that conscious value is self-intimating/self-revealing, and in order to pass judgement on something that you yourself (as a narrative being) will not get to experience you need some confidence (or reasonable cause) to believe that the same self-intimating quality of experience is present in other narrative orbits that will not interact with you. For the same reasons as (1) above, it makes no sense to care about philosophical zombies (no matter how much they scream at you), but the same is the case for “conscious value p. zombies” (where maybe they experience color qualia but do not experience hedonic tone i.e. they can’t suffer).
  3. A very important concept that comes up again and again in our research is the notion that “reduction cuts both ways”. We take dual aspect monism seriously, and in this view we would consider the mathematical description of an experience and its qualia two sides of the same coin. Now, many people come here and say “the moment you reduce an experience of bliss to a mathematical equation you have removed any fuzzy morality from it and arrived at a purely objective and factual account which does not support an ‘ought ontology'”. But doing this mental move requires you to take the mathematical account as a superior ontology to that of the self-intimating quality of experience. In our view, these are two sides of the same coin. If mystical experiences are just a bunch of chemicals, then a bunch of chemicals can also be a mystical experience. To reiterate: reduction cuts both ways, and this happens with the value of experience to the same extent as it happens with the qualia of e.g. red or cinnamon.
  4. Mike Johnson tends to bring up Wittgenstein and Quine to the “Is-Ought” problem because they are famous for ‘reducing language and meaning’ to games and networks of relationships. But here you should realize that you can apply the concept developed in (3) above just as well to this matter. In our view, a view of language that has “words and objects” at its foundation is not a complete ontology, nor is one that merely introduces language games to dissolve the mystery of meaning. What’s missing here is “felt sense” – the raw way in which concepts feel and operate on each other whether or not they are verbalized. It is my view that here phenomenal binding becomes critical because a felt sense that corresponds to a word, concept, referent, etc. in itself encapsulates a large amount of information simultaneously, and contains many invariants across a set of possible mental transformations that define what it is and what it is not. Moreover, felt senses are computationally powerful (rather than merely epiphenomenal). Consider Daniel Tammet’s mathematical feats achieved by experiencing numbers in complex synesthetic ways that interact with each other in ways that are isomorphic to multiplication, factorization, etc. Moreover, he does this at competitive speeds. Language, in a sense, could be thought of as the surface of felt sense. Daniel Dennett famously argued that you can “Quine Qualia” (meaning that you can explain them away with a groundless network of relationships and referents). We, on the opposite extreme, would bite the bullet of meaning and say that meaning itself is grounded in felt-sense and qualia. Thus, colors, aromas, emotions, and thoughts, rather than being ultimately semantically groundless as Dennett would have it, turn out to be the very foundation of meaning.
  5. In light of the above, let’s consider some experiences that embody the strongest degree of the felt sense of “ought to be” and “ought not to be” that we know of. On the negative side, we have things like cluster headaches and kidney stones. On the positive side we have things like Samadhi, MDMA, and 5-MeO-DMT states of consciousness. I am personally more confident that the “ought not to be” aspect of experience is real than that the “ought to be” aspect is, which is why I have a tendency (though no strong commitment) towards negative utilitarianism. When you touch a hot stove you get an involuntary reaction and the associated valence qualia of “reality needs you to recoil from this”, and in such cases one has degrees of freedom into which to back off. But when experiencing cluster headaches and kidney stones, this sensation (that self-intimating felt-sense of “this ought not to be”) is omnidirectional. The experience is one in which every direction feels negative, and in turn, at its extremes, one feels spiritually violated (“a major ethical emergency” is how a sufferer of cluster headaches recently described it to me). This brings me to…
  6. The apparent illusory nature of value in light of meditative deconstruction of felt-senses. As you put it elsewhere: “Introspectively – Meditators with deep experience typically report all concepts are delusion. This is realized in a very direct experiential way.” Here I am ambivalent, though my default response is to make sense of the meditation-induced feeling that “value is illusory” as itself an operation on one’s conscious topology that makes the value quality of experience get diminished or plugged out. Meditation masters will say things like “if you observe the pain very carefully, if you slice it into 30 tiny fragments per second, you will realize that the suffering you experience from it is an illusory construction”. And this kind of language itself is, IMO, liable to give off the illusion that the pain was illusory to begin with. But here I disagree. We don’t say that people who take a strong opioid to reduce acute pain are “gaining insight into the fundamental nature of pain” and that’s “why they stop experiencing it”. Rather, we understand that the strong opioid changes the neurological conditions in such a way that the quality of the pain itself is modified, which results in a duller, “asymbolic“, non-propagating, well-confined discomfort. In other words, strong opioids reduce the value-quality of pain by locally changing the nature of pain rather than by bringing about a realization of its ultimate nature. The same with meditation. The strongest difference here, I think, would be that opioids are preventing the spatial propagation of pain “symmetry breaking structures” across one’s experience and thus “confine pain to a small spatial location”, whereas meditation does something different that is better described as confining the pain to a small temporal region. This is hard to explain in full, and it will require us to fully formalize how the subjective arrow of time is constructed and how pain qualia can make copies across it. 
[By noting the pain very quickly one is, I believe, preventing it from building up and then having “secondary pain” which emerges from the cymatic resonance of the various lingering echoes of pain across one’s entire “pseudo-time arrow of experience”.] Sorry if this sounds like word salad, I am happy to unpack these concepts if needed, while also admitting that we are in early stages of the theoretical and empirical development.
  7. Finally, I will concede that the common sense view of “reference” is very deluded on many levels. The very notion that we can refer to an experience with another experience, that we can encode the properties of a different moment of experience in one’s current moment of experience, that we can talk about the “real world” or its “objective ethical values” or “moral duty” is very far from sensical in the final analysis. Reference is very tricky, and I think that a full understanding of consciousness will do some severe violence to our common sense in this area. That, however, is different from the self-disclosing properties of experience such as red qualia and pain qualia. You can do away with all of common sense reference while retaining a grounded understanding that “the constituents of the world are qualia values and their local binding relationships”. In turn, I do think that we can aim to do a decently good job at re-building from the ground up a good approximation of our common sense understanding of the world using “meaning grounded in qualia”, and once we do that we will be on a solid foundation (as opposed to the, admittedly very messy, quasi-delusional character of thoughts as they exist today). Needless to say, this may also require us to change our state of consciousness. “Someday we will have thoughts like sunsets” – David Pearce.

 

Open Individualism and Antinatalism: If God could be killed, it’d be dead already

Abstract

Personal identity views (closed, empty, open) serve in philosophy the role that conservation laws play in physics. They recast difficult problems in solvable terms, and by expanding our horizon of understanding, they likewise allow us to conceive of new classes of problems. In this context, we posit that philosophy of personal identity is relevant in the realm of ethics by helping us address age-old questions like whether being born is good or bad. We further explore the intersection between philosophy of personal identity and philosophy of time, and discuss the ethical implications of antinatalism in a tenseless open individualist “block-time” universe.

Introduction

Learning physics, we often find wide-reaching concepts that simplify many problems by using an underlying principle. A good example of this is the law of conservation of energy. Take for example the following high-school physics problem:

An object with a mass of X kilograms falls from a height of Y meters on a planet with no atmosphere and a gravitational acceleration of Z g. Calculate the velocity with which the object will hit the ground.

One could approach this problem by using Newton’s laws of motion: write the distance traveled by the object as a function of time, differentiate it to obtain the velocity, and evaluate that velocity at the moment the object has fallen Y meters.

Alternatively, you could simply note that, given that energy is conserved, all of the potential energy of the object at a height of Y meters will be transformed into kinetic energy at ground level. Equating the two and solving for the velocity makes the problem much easier.
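To make the contrast concrete, here is a minimal sketch (in Python, with illustrative numbers, since the problem statement leaves X, Y, and Z symbolic) comparing the step-by-step Newtonian approach with the one-line energy-conservation shortcut:

```python
import math

def impact_velocity_energy(height, g):
    # Conservation of energy: m*g*h = (1/2)*m*v**2, so v = sqrt(2*g*h).
    # Note that the mass cancels out entirely.
    return math.sqrt(2 * g * height)

def impact_velocity_newton(height, g, dt=1e-5):
    # Step-by-step numerical integration of the equations of motion,
    # standing in for the longhand kinematics route.
    y, v = height, 0.0
    while y > 0:
        v += g * dt   # acceleration updates velocity
        y -= v * dt   # velocity updates position
    return v

# Illustrative values: a 20 m drop under Earth-like gravity.
v_shortcut = impact_velocity_energy(20.0, 9.81)
v_longhand = impact_velocity_newton(20.0, 9.81)
```

Both routes agree to within a small numerical error (about 19.8 m/s here), but the conservation-law route skips the integration entirely, which is the sense in which grasping the invariant simplifies the problem.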

Once one has learned “the trick” one starts to see many other problems differently. Grasping these deep invariants opens up new horizons: many problems that once seemed impossible become solvable, and one can also pose new questions, which in turn reveal new problems that the principles alone cannot solve.

Does this ever happen in philosophy? Perhaps entire classes of difficult problems in philosophy become trivial (or at least tractable) once one grasps sufficiently powerful principles. Such is the case, I would claim, with transcending common-sense views of personal identity.

Personal Identity: Closed, Empty, Open

In Ontological Qualia I discussed three core views about personal identity. For those who have not encountered these concepts, I recommend reading that article for an expanded discussion.

In brief:

  1. Closed Individualism: You start existing when you are born, and stop when you die.
  2. Empty Individualism: You exist as a “time-slice” or “moment of experience.”
  3. Open Individualism: There is only one subject of experience, who is everyone.


Most people are Closed Individualists; this is the default common sense view for good evolutionary reasons. But what grounds are there to believe in this view? Intuitively, the fact that you will wake up in “your body” tomorrow is obvious and needs no justification. However, explaining why this is the case in a clear way requires formalizing a wide range of concepts such as causality, continuity, memory, and physical laws. And when one tries to do so one will generally find a number of barriers that will prevent one from making a solid case for Closed Individualism.

As an example line of argument, one could argue that what defines you as an individual is your set of memories, and since the person who will wake up in your body tomorrow is the only human being with access to your current memories then you must be it. And while this may seem to work on the surface, a close inspection reveals otherwise. In particular, all of the following facts work against it: (1) memory is a constructive process and every time you remember something you remember it (slightly) differently, (2) memories are unreliable and do not always work at will (e.g. false memories), (3) it is unclear what happens if you copy all of your memories into someone else (do you become that person?), (4) how many memories can you swap with someone until you become a different person?, and so on. Here the more detailed questions one asks, the more ad-hoc modifications of the theory are needed. In the end, one is left with what appears to be just a set of conventional rules to determine whether two persons are the same for practical purposes. But it does not seem to carve nature at its joints; you’d be merely over-fitting the problem.

The same happens with most Closed Individualist accounts. You need to define what the identity carrier is, and after doing so one can identify situations in which identity is not well-defined given that identity carrier (memory, causality, shared matter, etc.).

But for both Open and Empty Individualism, identity is well-defined for any being in the universe. Either all are the same, or all are different. Critics might say that this is a trivial and uninteresting point, perhaps even just definitional. Closed Individualism seems sufficiently arbitrary, however, that questioning it is warranted, and once one does so it is reasonable to start the search for alternatives by taking a look at the trivial cases in which either all or none of the beings are the same.

Moreover, there are many arguments in favor of these views. They indeed solve and usefully reformulate a range of philosophical problems when applied diligently. I would argue that they play a role in philosophy similar to that of conservation of energy in physics. The energy conservation law has been empirically tested to extremely high levels of precision; that kind of empirical support is something we will have to do without in the realm of philosophy, where we must rely instead on powerful philosophical insights. In addition, these views make a lot of problems tractable and offer a powerful lens through which to interpret core difficulties in the field.

Open and Empty Individualism either solve or have bearings on: Decision theory, utilitarianism, fission/fusion, mind-uploading and mind-melding, panpsychism, etc. For now, let us focus on…

Antinatalism

Antinatalism is a philosophical view that posits that, all considered, it is better not to be born. Many philosophers could be adequately described as antinatalists, but perhaps the most widely recognized proponent is David Benatar. A key argument Benatar considers is that there might be an asymmetry between pleasure and pain. Granted, he would say, experiencing pleasure is good, and experiencing suffering is bad. But while “the absence of pain is good, even if that good is not enjoyed by anyone”, we also have that “the absence of pleasure is not bad unless there is somebody for whom this absence is a deprivation.” Thus, while being born can give rise to both good and bad, not being born can only be good.

Contrary to popular perception, antinatalists are not more selfish or amoral than others. On the contrary, their willingness to “bite the bullet” of a counter-intuitive but logically defensible argument is a sign of being willing to face social disapproval for a good cause. In line with the stereotype, however, it is generally true that antinatalists are temperamentally depressive. This, of course, does not invalidate their arguments. If anything, sometimes a degree of depressive realism is essential to arrive at truly sober views in philosophy. But it shouldn’t be a surprise to learn that experiencing suffering, or having experienced it in the past, predisposes people to vehemently argue for the importance of its elimination. Having a direct acquaintance with the self-disclosing nastiness of suffering does give one a broader evidential base for commenting on the matter of pain and pleasure.

Antinatalism and Closed Individualism

Interestingly, Benatar’s argument, and those of many antinatalists, rely implicitly on personal identity background assumptions. In particular, antinatalism is usually framed in a way that assumes Closed Individualism.

The idea that a “person can be harmed by coming into existence” is developed within a conceptual framework in which the inhabitants of the universe are narrative beings. These beings have both spatial and temporal extension. And they also have the property that had the conditions previous to their birth been different, they might not have existed. But how many possible beings are there? How genetically or environmentally different do they need to be to be different beings? What happens if two beings merge? Or if they converge towards the same exact physical configuration over time?

 

This conceptual framework has counter-intuitive implications when taken to the extreme. For example, the amount of harm you do depends on how many people you allow to be born, rather than on how many years of suffering you prevent.

For the sake of the argument, imagine that you have control over a sentient-AI-enabled virtual environment in which you can make beings start existing and stop existing. Say that you create two beings, A and B, who are different in morally irrelevant ways (e.g. one likes blue more than red, but on average they both end up suffering and delighting in their experience with the same intensity). With Empty Individualism, you would consider giving A 20 years of life and not creating B vs. giving A and B 10 years of life each to be morally equivalent. But with Closed Individualism you would rightly worry that these two scenarios are completely different. By giving years of life to both A and B (any amount of life!) you have doubled the number of subjects who are affected by your decisions. If the gulf of individuality between two persons is infinite, as Closed Individualism would have it, by creating both A and B you have created two parallel realities, and that has an ontological effect on existence. It’s a big deal. Perhaps a way to put it succinctly would be: God considers much more carefully the question of whether to create a person who will live only 70 years versus whether to add a million years of life to an angel who has already lived for a very long time. Creating an entirely new soul is not to be taken lightly (incidentally, this may cast the pro-choice/pro-life debate in an entirely new light).
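The contrast between the two accounting schemes in the A/B scenario can be sketched in a few lines of Python (a toy model, assuming one unit of value per year of life and nothing else morally relevant):

```python
def empty_individualism_value(lives):
    # Under Empty Individualism only moments of experience are fundamental,
    # so all that matters is the total (summed) value across all moments.
    return sum(sum(years) for years in lives)

def closed_individualism_subjects(lives):
    # Under Closed Individualism each life is a separate subject, so the
    # number of subjects brought into existence matters in itself.
    return len(lives)

scenario_1 = [[1] * 20]              # A lives 20 years; B is never created.
scenario_2 = [[1] * 10, [1] * 10]    # A and B live 10 years each.
```

The two scenarios come out identical on the first tally (20 value-moments each) but different on the second (one subject versus two), which is exactly the disagreement described above.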

Thus, antinatalism is usually framed in a way that assumes Closed Individualism. The idea that a being is (possibly) harmed by coming into existence casts the possible solutions in terms of whether one should allow animals (or beings) to be born. But if one were to take an Open or Empty Individualist point of view, the question becomes entirely different. Namely, what kind of experiences should we allow to exist in the future…

Antinatalism and Empty Individualism

I think that the strongest case for antinatalism comes from a take on personal identity that is different than the implicit default (Closed Individualism). If you assume Empty Individualism, in particular, reality starts to seem a lot more horrible than you had imagined. Consider how in Empty Individualism fundamental entities exist as “moments of experience” rather than narrative streams. Therefore, every time that an animal suffers, what is actually happening is that some moments of experience get to have their whole existence in pain and suffering. In this light, one stops seeing people who suffer terrible happenings (e.g. kidney stones, schizophrenia, etc.) as people who are unlucky, and instead one sees their brains as experience machines capable of creating beings whose entire existence is extremely negative.

With Empty Individualism there is simply no way to “make it up to someone” for having had a bad experience in the past. Thus, out of compassion for the extremely negative moments of experience, one could argue that it might be reasonable to try to avoid this whole business of life altogether. That said, this imperative does not come from the asymmetry between pain and pleasure that Benatar talks about (which, as we saw, implicitly requires Closed Individualism). In Empty Individualism it does not make sense to say that someone has been brought into existence. So antinatalism gets justified from a different angle, albeit one that might be even more powerful.

In my assessment, the mere possibility of Empty Individualism is a good reason to take antinatalism very seriously.

It is worth noting that the combination of Empty Individualism and Antinatalism has been (implicitly) discussed by Thomas Metzinger (cf. Benevolent Artificial Anti-Natalism (BAAN)) and FRI‘s Brian Tomasik.

Antinatalism and Open Individualism

Here is a Reddit post and then a comment on a related thread (by the same author) worth reading on this subject (indeed these artifacts motivated me to write the article you are currently reading):

There’s an interesting theory of personal existence making the rounds lately called Open Individualism. See here, here, and here. Basically, it claims that consciousness is like a single person in a huge interconnected library. One floor of the library contains all of your life’s experiences, and the other floors contain the experiences of others. Consciousness wanders the aisles, and each time he picks up a book he experiences whatever moment of life is recorded in it as if he were living it. Then he moves onto the next one (or any other random one on any floor) and experiences that one. In essence, the “experiencer” of all experience everywhere, across all conscious beings, is just one numerically identical subject. It only seems like we are each separate “experiencers” because it can only experience one perspective at a time, just like I can only experience one moment of my own life at a time. In actuality, we’re all the same person.

 

Anyway, there’s no evidence for this, but it solves a lot of philosophical problems apparently, and in any case there’s no evidence for the opposing view either because it’s all speculative philosophy.

 

But if this were true, and when I’m done living the life of this particular person, I will go on to live every other life from its internal perspective, it has some implications for antinatalism. All suffering is essentially experienced by the same subject, just through the lens of many different brains. There would be no substantial difference between three people suffering and three thousand people suffering, assuming their experiences don’t leave any impact or residue on the singular consciousness that experiences them. Even if all conscious life on earth were to end, there are still likely innumerable conscious beings elsewhere in the universe, and if Open Individualism is correct, I’ll just move on to experiencing those lives. And since I can re-experience them an infinite number of times, it makes no difference how many there are. In fact, even if I just experienced the same life over and over again ten thousand times, it wouldn’t be any different from experiencing ten thousand different lives in succession, as far as suffering is concerned.

 

The only way to end the experience of suffering would be to gradually elevate all conscious beings to a state of near-constant happiness through technology, or exterminate every conscious being like the Flood from the Halo series of games. But the second option couldn’t guarantee that life wouldn’t arise again in some other corner of the multiverse, and when it did, I’d be right there again as the conscious experiencer of whatever suffering it would endure.

 

I find myself drawn to Open Individualism. It’s not mysticism, it’s not a Big Soul or something we all merge with, it’s just a new way of conceptualizing what it feels like to be a person from the inside. Yet, it has these moral implications that I can’t seem to resolve. I welcome any input.

 

– “Open individualism and antinatalism” by Reddit user CrumbledFingers in r/antinatalism (March 23, 2017)

And on a different thread:

I have thought a lot about the implications of open individualism (which I will refer to as “universalism” from here on, as that’s the name coined by its earliest proponent, Arnold Zuboff) for antinatalism. In short, I think it has two major implications, one of which you mention. The first, as you say, is that freedom from conscious life is impossible. This is bad, but not as bad as it would be if I were aware of it from every perspective. As it stands, at least on Earth, only a small number of people have any inkling that they are me. So, it is not like experiencing the multitude of conscious events taking place across reality is any kind of burden that accumulates over time; from the perspective of each isolated nervous system, it will always appear that whatever is being experienced is the only thing I am experiencing. In this way, the fact that I am never truly unconscious does not have the same sting as it would to, for example, an insomniac, who is also never unconscious but must experience the constant wakefulness from one integrated perspective all the time.

 

It’s like being told that I will suffer total irreversible amnesia at some point in my future; while I can still expect to be the person that experiences all the confusion and anxiety of total amnesia when it happens, I must also acknowledge that the residue of any pains I would have experienced beforehand would be erased. Much of what makes consciousness a losing game is the persistence of stresses. Universalism doesn’t imply that any stresses will carry over between the nervous systems of individual beings, so the reality of my situation is by no means as nightmarish as eternal life in a single body (although, if there exists an immortal being somewhere in the universe, I am currently experiencing the nightmare of its life).

 

The second implication of this view for antinatalism is that one of the worst things about coming into existence, namely death, is placed in quite a different context. According to the ordinary view (sometimes called “closed” individualism), death permanently ends the conscious existence of an alienated self. Universalism says there is no alienated self that is annihilated upon the death of any particular mind. There are just moments of conscious experience that occur in various substrates across space and time, and I am the subject of all such experiences. Thus, the encroaching wall of perpetual darkness and silence that is usually an object of dread becomes less of a problem for those who have realized that they are me. Of course, this realization is not built into most people’s psychology and has to be learned, reasoned out, intellectually grasped. This is why procreation is still immoral, because even though I will not cease to exist when any specific organism dies, from the perspective of each one I will almost certainly believe otherwise, and that will always be a source of deep suffering for me. The fewer instances of this existential dread, however misplaced they may be, the better.

 

This is why it’s important to make more people understand the position of universalism/open individualism. In the future, long after the person typing this sentence has perished, my well-being will depend in large part on having the knowledge that I am every person. The earlier in each life I come to that understanding, and thus diminish the fear of dying, the better off I will be. Naturally, this project decreases in potential impact if conscious life is abundant in the universe, and in response to that problem I concede there is probably little hope, unless there are beings elsewhere in the universe that have comprehended who they are and are taking the same steps in their spheres of influence. My dream is that intelligent life eventually either snuffs itself out or discovers how to connect many nervous systems together, which would demonstrate to every connected mind that it has always belonged to one subject, has always been me, but I don’t have any reason to assume this is even possible on a physical level.

 

So, I suppose you are mostly right about one thing: there are no lucky ones that escape the badness of life’s worst agonies, either by virtue of a privileged upbringing or an instantaneous and painless demise. They and the less fortunate ones are all equally me. Yet, the horror of going through their experiences is mitigated somewhat in the details.

 

– A comment by CrumbledFingers in the Reddit post “Antinatalism and Open individualism“, also in r/antinatalism (March 12, 2017)

Our brain tries to make sense of metaphysical questions in wetware that shares computational space with a lot of adaptive survival programs. Whether or not you have thick barriers (cf. thick and thin boundaries of the mind), the way you assess the value of situations as a human will tend to over-focus on whatever would allow you to go up Maslow’s hierarchy of needs (or, more cynically, to achieve great feats that signal your genetic fitness). Our motivational architecture is implemented in such a way that it is very good at handling questions like how to find food when you are hungry and how to play social games in a way that impresses others and leaves a social mark. Our brains utilize many heuristics based on personhood and narrative streams when exploring the desirability of present options. We are people, and our brains are adapted to solve people problems. Not, as it turns out, general problems involving the entire state-space of possible conscious experiences.

Prandium Interruptus

Our brains render our inner world-simulation with flavors and textures of qualia to suit their evolutionary needs. This, in turn, impairs our ability to aptly represent scenarios that go beyond the range of normal human experiences. Let me illustrate this point with the following thought experiment:

Would you rather (a) have a 1-hour meal, or (b) have the same meal, except that at the half-hour point you are instantly transformed into a simple, amnesic, and blank experience of perfectly neutral hedonic value that lasts ten quintillion years, after which that extremely long stretch of neither-happiness-nor-suffering ends and you resume the rest of the meal as if nothing had happened, with no memory of the neutral period?

According to most utilitarian calculi these two scenarios ought to be perfectly equivalent. In both cases the total amount of positive and negative qualia is the same (the full duration of the meal) and the only difference is that the latter also contains a large amount of neutral experience too. Whether classical or negative, utilitarians should consider these experiences equivalent since they contain the same amount of pleasure and pain (note: some other ethical frameworks do distinguish between these cases, such as average and market utilitarianism).
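The equivalence claimed by a sum-based utilitarian calculus can be checked mechanically. Here is a toy model (my own illustrative encoding: +1 per pleasant minute, 0 per neutral minute, and a merely large neutral stretch standing in for the ten quintillion years):

```python
def total_hedonic_value(moments):
    # A simple sum-based calculus: neutral moments contribute exactly zero,
    # so inserting any number of them leaves the total unchanged.
    return sum(moments)

uninterrupted = [1] * 60                              # (a) the 1-hour meal
interrupted = [1] * 30 + [0] * 1_000_000 + [1] * 30   # (b) meal + neutral gap
```

Both lists sum to 60, so any calculus that only counts pleasure and pain (classical or negative) cannot distinguish the two scenarios, however long the neutral gap is made.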

Intuitively, however, (a) seems a lot better than (b). One imagines oneself having an awfully long experience, bored out of one’s mind, just wanting it to end, get it over with, and get back to enjoying the nice meal. But the very premise of the thought experiment presupposes that one will not be bored during that period of time, nor will one be wishing it to be over, or anything of the sort, considering that all of those are mental states of negative quality and the experience is supposed to be neutral.

Now this is of course a completely crazy thought experiment. Or is it?

The One-Electron View

In 1940 John Wheeler proposed to Richard Feynman the idea that all of reality is made of a single electron moving backwards and forwards in time, interfering with itself. This view has come to be known as the One-Electron Universe. Under Open Individualism, that one electron is you. From every single moment of experience to the next, you may have experienced life as a sextillion different animals, been 10^32 fleeting macroscopic entangled particles, and gotten stuck as a single non-interacting electron in the inter-galactic medium for googols of subjective years. Of course you will not remember any of this, because your memories, and indeed all of your motivational architecture and anticipation programs, are embedded in the brain you are instantiating right now. From that point of view, there is absolutely no trace of the experiences you had during this hiatus.

The above way of describing the one-electron view is still just an approximation. In order to see it fully, we also need to address the fact that there is no “natural” order to all of these different experiences. Every way of factorizing it and describing the history of the universe as “this happened before this happened” and “this, now that” could be equally inapplicable from the point of view of fundamental reality.

Philosophy of Time


Presentism is the view that only the present moment is real. The future and the past are just conceptual constructs useful to navigate the world, but not actual places that exist. The “past exists as footprints”, in a manner of speaking. “Footprints of the past” are just strangely-shaped information-containing regions of the present, including your memories. Likewise, the “future” is unrealized: a helpful abstraction which evolution gave us to survive in this world.

On the other hand, eternalism treats the future and the past as always-actualized always-real landscapes of reality. Every point in space-time is equally real. Physically, this view tends to be brought up in connection with the theory of relativity, where frame-invariant descriptions of the space-time continuum have no absolute present line. For a compelling physical case, see the Rietdijk-Putnam argument.

Eternalism has been explored in literature and spirituality extensively. To name a few artifacts: The Egg, Hindu and Buddhist philosophy, the videos of Bob Sanders (cf. The Gap in Time, The Complexity of Time), the essays of Philip K. Dick and J. L. Borges, the poetry of T. S. Eliot, the fiction of Kurt Vonnegut Jr (Timequake, Slaughterhouse Five, etc.), and the graphic novels of Alan Moore, such as Watchmen:

Let me know in the comments if you know of any other work of fiction that explores this theme. In particular, I would love to assemble a comprehensive list of literature that explores Open Individualism and Eternalism.

Personal Identity and Eternalism

For the time being (no pun intended), let us assume that Eternalism is correct. How do Eternalism and personal identity interact? Doctor Manhattan in the above images (taken from Watchmen) exemplifies what it would be like to be a Closed Individualist Eternalist. He seems to be aware of his entire timeline at once, yet recognizes his unique identity apart from others. That said, as explained above, Closed Individualism is a distinctly unphysical theory of identity. One would thus expect Doctor Manhattan, given his physically grounded understanding of reality, to espouse a different theory of identity.

A philosophy that pairs Empty Individualism with Eternalism is the stuff of nightmares. Not only would we have, as with Empty Individualism alone, that some beings happen to exist entirely as beings of pain. We would also have that such unfortunate moments of experience are stuck in time. Like insects in amber, their expressions of horror and their urgency to run away from pain and suffering are forever crystallized in their corresponding spatiotemporal coordinates. I personally find this view paralyzing and sickening, though I am aware that such a reaction is not adaptive for the abolitionist project. Namely, even if “Eternalism + Empty Individualism” is a true account of reality, one ought not to be so frightened by it that one becomes incapable of working towards preventing future suffering. In this light, I adopt the attitude of “hope for the best, plan for the worst”.

Lastly, if Open Individualism and Eternalism are both true (as I suspect is the case), we would be in for what amounts to an incredibly trippy picture of reality. We are all one timeless spatiotemporal crystal. But why does this eternal crystal (who is everyone) exist? Here the one-electron view and the question “why does anything exist?” could both be simultaneously addressed with a single logico-physical principle. Namely, that the sum-total of existence contains no information to speak of. This is what David Pearce calls “Zero Ontology” (see: 1, 2, 3, 4). What you and I are, in the final analysis, is the necessary implication of there being no information; we are all a singular pattern of self-interference whose ultimate nature amounts to a dimensionless unit-sphere in Hilbert space. But this is a story for another post.

On a more grounded note, Scientific American recently ran an article that could be placed in this category of Open Individualism and Eternalism. In it the authors argue that the physical signatures of multiple-personality disorder, which explain the absence of phenomenal binding between alters that share the same brain, could be extended to explain why reality is both one and yet appears as the many. We are, in this view, all alters of the universe.

Personal Identity X Philosophy of Time X Antinatalism

Sober, scientifically grounded, and philosophically rigorous accounts of the awfulness of reality are rare. On the one hand, temperamentally happy individuals are more likely to think about the possibilities of heaven that lie ahead of us, and their heightened positive mood will likewise make them more likely to report on their findings. Temperamental depressives, on the other hand, may both investigate reality with less motivated reasoning than the euthymic and also be less likely to report on the results due to their subdued mood (“why even try? why even bother to write about it?”). Suffering in the Multiverse by David Pearce is a notable exception to this pattern. David’s essay highlights that if Eternalism is true together with Empty Individualism, there are vast regions of the multiverse filled with suffering that we can simply do nothing about (“Everett Hell Branches”). Taken together with a negative utilitarian ethic, this represents a calamity of (quite literally) astronomical proportions. And, sadly, there simply is no off-button to the multiverse as a whole. The suffering was, is, and always will be there. This means that the best we can do is to avoid the suffering of those beings in our forward light-cone (a drop relative to the size of the ocean of existence). The only hope left is to find a loophole in quantum mechanics that allows us to cross into other Everett branches of the multiverse and launch cosmic rescue missions. A counsel of despair or a rational prospect? Only time will tell.

Another key author that explores the intersection of these views is Mario Montano (see: Eternalism and Its Ethical Implications and The Savior Imperative).

A key point that both of these authors make is that however nasty reality might be, ethical antinatalists and negative utilitarians shouldn’t hold their breath about the possibility that reality can be destroyed. In Open Individualism plus Eternalism, the light of consciousness (perhaps what some might call the secular version of God) simply is, everywhere and eternally. If reality could be destroyed, such destruction is certainly limited to our forward light-cone. And unlike Closed Individualist accounts, it is not possible to help anyone by preventing their birth; the one subject of existence has already been born, and will never be unborn, so to speak.

Nor should ethical antinatalists and negative utilitarians think that avoiding having kids is in any way contributing to the cause of reducing suffering. It is reasonable to assume that the personality traits of agreeableness (specifically care and compassion), openness to experience, and high levels of systematizing intelligence are all over-represented among antinatalists. Insofar as these traits are needed to build a good future, antinatalists should in fact be some of the people who reproduce the most. Mario Montano says:

Hanson calls the era we live in the “dream time” since it’s evolutionarily unusual for any species to be wealthy enough to have any values beyond “survive and reproduce.” However, from an anthropic perspective in infinite dimensional Hilbert space, you won’t have any values beyond “survive and reproduce.” The you which survives will not be the one with exotic values of radical compassion for all existence that caused you to commit peaceful suicide. That memetic stream weeded himself out and your consciousness is cast to a different narrative orbit which wants to survive and reproduce his mind. Eventually. Wanting is, more often than not, a precondition for successfully attaining the object of want.

Physicalism Implies Existence Never Dies

Also, from the same essay:

Anti-natalists full of weeping benignity are literally not successful replicators. The Will to Power is life itself. It is consciousness itself. And it will be, when a superintelligent coercive singleton swallows superclusters of baryonic matter and then spreads them as the flaming word into the unconverted future light cone.

[…]

You eventually love existence. Because if you don’t, something which does swallows you, and it is that which survives.

I would argue that the above reasoning is not entirely correct in the large scheme of things*, but it is certainly applicable in the context of human-like minds and agents. See also: David Pearce’s similar criticisms to antinatalism as a policy.

This should underscore the fact that in its current guise, antinatalism is completely self-limiting. Worryingly, one could imagine an organized contingent of antinatalists conducting research on how to destroy life as efficiently as possible. Antinatalists are generally very smart, and if Eliezer Yudkowsky‘s claim that “every 18 months the minimum IQ necessary to destroy the world drops by one point” is true, we may be in for some trouble. Pearce’s take, Montano’s, and my own is that even if something akin to negative utilitarianism is the case, we should still pursue the goal of diminishing suffering in as peaceful a way as possible. The risk of trying to painlessly destroy the world and failing to do so might turn out to be ethically catastrophic. A much better bet would be, we claim, to work towards the elimination of suffering by developing commercially successful hedonic recalibration technology. This also has the benefit that both depressives and life-lovers will want to team up with you; indeed, the promise of super-human bliss can be extraordinarily motivating to people who already lead happy lives, whereas the prospect of achieving “at best nothing” sounds stale and uninviting (if not outright antagonistic) to them.

An Evolutionary Environment Set Up For Success

If we want to create a world free from suffering, we will have to contend with the fact that suffering is adaptive in certain environments. The solution here is to avoid such environments, and foster ecosystems of mind that give an evolutionary advantage to the super-happy. What’s more, we already have the basic ingredients to do so. In Wireheading Done Right I discussed how, right now, the economy is based on trading three core goods: (1) survival tools, (2) power, and (3) information about the state-space of consciousness. Thankfully, the world right now is populated by humans who largely choose to spend their extra income on fun rather than on trips to the sperm bank. In other words, people are willing to trade some of their expected reproductive success for good experiences. This is good because it allows the existence of an economy of information about the state-space of consciousness, and thus creates an evolutionary advantage for caring about consciousness and being good at navigating its state-space. But for this to be sustainable, we will need to find a way to make positive valence gradients (i.e. gradients of bliss) both economically useful and power-granting. Otherwise, I would argue, the part of the economy that is dedicated to trading information about the state-space of consciousness is bound to be displaced by the other two (i.e. survival and power). For a more detailed discussion on these questions see: Consciousness vs. Pure Replicators.


Can we make the benevolent exploration of the state-space of consciousness evolutionarily advantageous?

In conclusion, to close down hell (to the extent that is physically possible), we need to take advantage of the resources and opportunities granted to us by merely living in Hanson’s “dream time” (cf. Age of Spandrels). This includes the fact that right now people are willing to spend money on new experiences (especially if novel and containing positive valence), and the fact that philosophy of personal identity can still persuade people to work towards the wellbeing of all sentient beings. In particular, scientifically-grounded arguments in favor of both Open and Empty Individualism weaken people’s sense of self and make them more receptive to care about others, regardless of their genetic relatedness. On its natural course, however, this tendency may ultimately be removed by natural selection: if those who are immune to philosophy are more likely to maximize their inclusive fitness, humanity may devolve into philosophical deafness. The solution here is to identify the ways in which philosophical clarity can help us overcome coordination problems, highlight natural ethical Schelling points, and ultimately allow us to summon a benevolent super-organism to carry forward the abolition of as much suffering as is physically possible.

And only once we have done everything in our power to close down hell in all of its guises will we be able to enjoy the rest of our forward light-cone in good conscience. Till then, we ethically-minded folks shall relentlessly work on building universe-sized fire-extinguishers to put out the fire of Hell.


* This is for several reasons: (1) phenomenal binding is not epiphenomenal, (2) the most optimal computational valence gradients are not necessarily located on the positive side, sadly, and (3) wanting, liking, and learning are possible to disentangle.

Person-moment affecting views

by Katja Grace (source)

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do where the sentence ‘If Alice didn’t exist, she wouldn’t have X’ seems true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world for instance, it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, but some, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.



An interesting thing to point out here is that what Katja describes as the further-fact view is terminologically equivalent to what we here call Closed Individualism (cf. Ontological Qualia). This is the common-sense view that you start existing when you are born and stop existing when you die (which also has soul-based variants with possible pre-birth and post-death existence). This view is not very philosophically tenable because it presupposes that there is an enduring metaphysical ego distinct for every person. And yet, the vast majority of people still hold strongly to Closed Individualism. In some sense, in the article Katja tries to rescue the common-sense aspect of Closed Individualism in the context of ethics. That is, by steel-manning the common-sense notion that people (rather than moments of experience) are the relevant units for morality while also negating further-fact views, she provides reasons to keep using Closed Individualism as an intuition pump in ethics (if only for pragmatic reasons). In general, I consider this kind of discussion to be a very fruitful endeavor, as it approaches ethics by touching upon the key parameters that matter fundamentally: identity, value, and counterfactuals.

As you may gather from pieces such as Wireheading Done Right and The Universal Plot, at Qualia Computing we tend to think the most coherent ethical system arises when we take as a premise that the relevant moral agents are “moments of experience”. Contra person-affecting views, we don’t think it is meaningless to say that a given world is better than another one if not everyone in the first world is also in the second one. On the contrary – it really does not matter who lives in a given world. What matters is the raw subjective quality of the experiences in such worlds. If it is meaningless to ask “who is experiencing Alice’s experiences now?” once you know all the physical facts, then moral weight must be encoded in such physical facts alone. It may then turn out that the narrative aspect of an experience is irrelevant for determining the intrinsic value of that experience. People’s self-narratives may certainly have important instrumental uses, but at their core they don’t make it to the list of things that intrinsically matter (unlike, say, avoiding suffering).

A helpful philosophical move that we have found adds a lot of clarity here is to analyze the problem in terms of Open Individualism. That is, assume that we are all one consciousness and take it from there. If so, then the probability that you are a given person would be weighted by the amount of consciousness (or number of moments of experience, depending) that such person experiences throughout his or her life. You are everyone in this view, but you can only be each person one at a time from their own limited points of view. So there is a sensible way of weighting the importance of each person, and this is a function of the amount of time you spend being him or her (normalized by the amount of consciousness that person experiences, in case that is variable across individuals).
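As a toy illustration of that weighting scheme (the names, moment counts, and the optional per-person consciousness-scaling factor below are all made-up assumptions, not anything from the text):

```python
def identity_weights(moments, consciousness=None):
    """Weight each person by their share of all experienced moments,
    optionally scaled by a per-person 'amount of consciousness' factor."""
    if consciousness is None:
        consciousness = {person: 1.0 for person in moments}
    weighted = {p: n * consciousness[p] for p, n in moments.items()}
    total = sum(weighted.values())
    return {p: w / total for p, w in weighted.items()}

# Hypothetical people: one instantiates twice as many moments as the other,
# so under this view "you" spend twice as much time being them.
weights = identity_weights({"Alice": 80_000, "Bob": 40_000})
print(weights["Alice"], weights["Bob"])  # roughly 0.667 and 0.333
```

The normalization step is what makes the weights behave like probabilities: they sum to one, mirroring the idea that you are each person exactly in proportion to the time spent being them.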

If consciousness emerges victorious in its war against pure replicators, then it would make sense that the main theory of identity people would hold by default would be Open Individualism. After all, it is only Open Individualism that aligns individual incentives and the total wellbeing of all moments of experience throughout the universe.

That said, in principle, it could turn out that Open Individualism is not needed to maximize conscious value – that while it may be useful instrumentally to align the existing living intelligences towards a common consciousness-centric goal (e.g. eliminating suffering, building a harmonic society, etc.), in the long run we may find that ontological qualia (the aspect of our experience that we use to represent the nature of reality, including our beliefs about personal identity) has no intrinsic value. Why bother experiencing heaven in the form of a mixture of 95% bliss and 5% ‘a sense of knowing that we are all one’, if you can instead just experience 100% pure bliss?

At the ethical limit, anything that is not perfectly blissful might end up being thought of as a distraction from the cosmic telos of universal wellbeing.

No-Self vs. True Self

This is one of those questions that tends to arise when Hinduism or Christianity come in contact with Buddhism. However, perhaps it should arise more when Buddhism is thinking about itself. I include this discussion here because it addresses some points that are useful for later and previous discussions. True Self and no-self are actually talking about the same thing, just from different perspectives. Each can be useful, but each is an extreme. Truly, the truth is a Middle Way between these and is indescribable, but I will try to explain it anyway in the hope that it may support actual practice. It may seem odd to put a chapter that deals with the fruits of insight practices in the middle of descriptions of the samatha jhanas, but hopefully when you read the next chapter you will understand why it falls where it does.

For all you intellectuals out there, the way in which this chapter is most likely to support practice is to be completely incomprehensible and thus useless. Ironically, I have tried to make this chapter very clear, and in doing so have crafted a mess of paradoxes. In one of his plays, Shakespeare puts philosophers on par with lawyers. In terms of insight practice, a lawyer who is terrible at insight practices but tries to do them anyway is vastly superior to a world-class philosopher who is merely an intellectual master of this theory but practices not at all.

Remember that the spiritual life is something you do and hopefully understand but not some doctrine to believe. Those of you who are interested in the formal Buddhist dogmatic anti-dogma should check out the particularly profound Sutta 1, “The Root of All Things“, in The Middle Length Discourses of the Buddha, as well as Sutta 1, “The Supreme Net (What the Teaching is Not)“, in The Long Discourses of the Buddha.

Again, realize that all of this language is basically useless in the end and prone to not making much sense. Only examination of our reality will help us to actually directly understand this, but it will not be in a way accessible to the rational mind. Nothing in the content of our thoughts can really explain the experience of the understanding I am about to point to, though there is something in the direct experience of those thoughts that might reveal it. Everything that I am about to try to explain here can become a great entangling net of useless views without direct insight.

Many of the juvenile and tedious disputes between the various insight traditions result from fixation on these concepts and inappropriate adherence to only one side of these apparent paradoxes. Not surprisingly, these disputes between insight traditions generally arise from those with little or no insight. One clear mark of the development of true insight is that these paradoxes lose their power to confuse and obscure. They become tools for balanced inquiry and instruction, beautiful poetry, intimations of the heart of the spiritual life and of one’s own direct and non-conceptual experience of it.

No-self teachings directly counter the sense that there is a separate watcher, and that this watcher is an “I” that is in control, observing reality or subject to the tribulations of the world. Truly, this is a useful illusion to counter. However, if misunderstood, this teaching can produce a shadow side that reeks of nihilism, disengagement with life and denial. People can get all fixated on eliminating a “self”, when the emphasis is supposed to be on the words “separate” and “permanent,” as well as on the illusion that is being created. A better way to say this would be, “stopping the process of mentally creating the illusion of a separate self from sensations that are inherently non-dual, utterly transient and thus empty of any separate, permanent self.”

Even if you get extremely enlightened, you will still be here from a conventional point of view, but you will also be just an interdependent and intimate part of this utterly transient universe, just as you actually always have been. The huge and yet subtle difference is that this will be known directly and clearly. The language “eliminating your ego” is similarly misunderstood most of the time.

You see, there are physical phenomena and mental phenomena, as well as the “consciousness” or mental echo of these, which is also in the category of mental phenomena. These are just phenomena, and all phenomena are not permanent, separate self, as they all change and are all intimately interdependent. They are simply “aware,” i.e. manifest, where they are without any observer of them at all. The boundaries that seem to differentiate self from not-self are arbitrary and conceptual, i.e. not the true nature of things. Said another way, reality is intimately interdependent and non-dual, like a great ocean.

There is also “awareness”, but awareness is not a thing or localized in a particular place, so to even say “there is also awareness” is already a tremendous problem, as it implies separateness and existence where none can be found. To be really philosophically correct about it, borrowing heavily from Nagarjuna, awareness cannot be said to fit any of the descriptions: that it exists, that it does not exist, that it both exists and does not exist, that it neither exists nor doesn’t exist. Just so, in truth, it cannot be said that: we are awareness, that we are not awareness, that we are both awareness and not awareness, or even that we are neither awareness nor not awareness. We could go through the same pattern with whether or not phenomena are intrinsically luminous.

For the sake of discussion, and in keeping with standard Buddhist thought, awareness is permanent and unchanging. It is also said that, “All things arise from it, and all things return to it,” though again this implies a false certainty about something which is actually impenetrably mysterious and mixing the concept of infinite potential with awareness is a notoriously dangerous business. We could call it “God”, “Nirvana”, “The Tao”, “The Void”, “Allah”, “Krishna”, “Intrinsic Luminosity”, “Buddha Nature”, “Buddha”, “Bubba” or just “awareness” as long as we realize the above caveats, especially that it is not a thing or localized in any particular place and has no definable qualities. Awareness is sometimes conceptualized as pervading all of this while not being all of this, and sometimes conceptualized as being inherent in all of this while not being anything in particular. Neither is quite true, though both perspectives can be useful.

If you find yourself adopting any fixed idea about what we are calling “awareness” here, try also adopting its logical opposite to try to achieve some sense of direct inquisitive paradoxical imbalance that shakes fixed views about this stuff and points to something beyond these limited concepts. This is incredibly useful advice for dealing with all teachings about “Ultimate Reality.” I would also recommend looking into the true nature of the sensations that make up philosophical speculation and all sensations of questioning.

While phenomena are in flux from their arising to their passing, there is awareness of them. Thus, awareness is not these objects, as it is not a thing, nor is it separate from these objects as there would be no experience if this were so. By examining our reality just as it is, we may come to understand this.

Further, phenomena do not exist in the sense of abiding in a fixed way for any length of time, and thus are utterly transitory, and yet the laws that govern the functioning of this utter transience hold. That phenomena do not exist does not mean that there is not a reality, but that this reality is completely inconstant, except for awareness, which is not a thing. This makes no sense to the rational mind, but that is how it is with this stuff.

One teaching that comes out of the Theravada that can be helpful is that there are Three Ultimate Dharmas or ultimate aspects of reality: materiality (the sensations of the first five sense doors), mentality (all mental sensations) and Nirvana (though they would call it “Nibbana,” which is the Pali equivalent of the Sanskrit). In short, this is actually it, and “that” which is beyond this is also it. Notice that “awareness” is definitely not on this list. It might be conceptualized as being all three (from a True Self point of view), or quickly discarded as being a useless concept that solidifies a sense of a separate or localized “watcher” (from the no-self point of view).

Buddhism also contains a strangely large number of True Self teachings, though if you told most Buddhists this they would give you a good scolding. Many of these have their origins in Hindu Vedanta and Hindu Tantra. All the talk of Buddha Nature, the Bodhisattva Vow, and that sort of thing are True Self teachings. True Self teachings point out that this “awareness” is “who we are,” but isn’t a thing, so it is not self. They also point out that we actually are all these phenomena, rather than all of these phenomena being seen as something observed and thus not self, which they are also as they are utterly transient and not awareness. This teaching can help practitioners actually examine their reality just as it is and sort of “inhabit it” in an honest and realistic way, or it can cause them to cling to things as “self” if they misunderstand this teaching. I will try again…

You see, as all phenomena are observed, they cannot possibly be the observer. Thus, the observer, which is awareness and not any of the phenomena pretending to be it, cannot possibly be a phenomenon and thus is not localized and doesn’t exist. This is no-self. However, all of these phenomena are actually us from the point of view of non-duality and interconnectedness, as the illusion of duality is just an illusion. When the illusion of duality permanently collapses in final awakening, all that is left is all of these phenomena, which is True Self, i.e. the lack of a separate self and thus just all of this as it is. Remember, however, that no phenomena abide for even an instant, and so are empty of permanent abiding and thus of stable existence.

This all brings me to one of my favorite words, “non-dual,” a word that means that both duality and unity fail to clearly describe ultimate reality. As “awareness” is in some way separate from and unaffected by phenomena, we can’t say that unity is the true answer. Unitive experiences arise out of strong concentration and can easily fool people into thinking they are the final answer. They are not.

That said, it is because “awareness” is not a phenomenon, thing or localized in any place that you can’t say that duality is true. A duality implies something on both sides, an observer and an observed. However, there is no phenomenal observer, so duality does not hold up under careful investigation. Until we have a lot of fundamental insight, the sense that duality is true can be very compelling and can cause all sorts of trouble. We extrapolate false dualities from sensations until we are very highly enlightened.

Thus, the word “non-dual” is an inherently paradoxical term, one that confounds reason and even our current experience of reality. If we accept the working hypothesis that non-duality is true, then we will be able to continue to reject both unitive and dualistic experiences as the true answer and continue to work towards awakening. This is probably the most practical application of discussions of no-self and True Self.

No-self and True Self are really just two sides of the same coin. There is a great little poem by one Kalu Rinpoche that goes something like:

We live in illusion
And the appearance of things.
There is a reality:
We are that reality.
When you understand this,
You will see that you are nothing.
And, being nothing,
You are everything.
That is all.

There are many fine poems on similar themes presented in Sogyal Rinpoche’s The Tibetan Book of Living and Dying. It is because we are all of this that compassionate action for all beings and ourselves is so important. To truly understand this moment is to truly understand both, which is the Middle Way between these two extremes (see Nisargadatta’s I Am That for a very down-to-earth discussion of these issues). While only insight practices will accomplish this, there are some concentration attainments (the last four jhanas or Formless Realms) that can really help put things in proper perspective, though they do not directly cause deep insight and awakening unless the true nature of the sensations that make them up is understood.

Quote from Mastering the Core Teachings of the Buddha: An Unusually Hardcore Dharma Book by Daniel M. Ingram (pages 182-187)


Cf. Ontological Qualia: The Future of Personal Identity (for a discussion about how the experience of identity might change in the future), The Manhattan Project of Consciousness (for speculative writing about the potential social benefits of combining bliss technologies with qualia that make us experience oneness in a reliable way), and this essay about Burning Man (which introduces the concept of the Goldilocks Zone of Oneness, referring to the experience of “feeling neither one with everything nor ontologically separate from the rest of reality,” in which wonderful things like love, gratitude, and meaning can thrive).