Cartoon Epistemology by Steven Lehar (2003)

From: http://slehar.com/wwwRel//cartoonepist/cartoonepist.html

Part I: Something Very Strange!

Did you ever notice? There is something ver-ry strange about this world of ours.

Really? Like what?

Do you ever feel like you are trapped in some kind of bubble? I mean look—the sky looks like a dome over my head. Is that really the shape of the sky? And did you notice something really funny?

No what? Tell me!

Did you ever notice that things that are far away look smaller? And things that are nearby look bigger! Do you realize how strange that is?

Strange? That’s not strange at all! That’s just perspective, just like it happens in a camera. In a photograph farther things look smaller, too.

Yes, but the perspective in a camera is projected onto the flat sheet of film. There is no mystery in that kind of perspective, it is simply a projection from a 3-D world through a focal point onto a 2-D surface. And in your eyeball the retina is like the film.

So?

But take a look at this street here. Is this street the picture on your retina?

No, that’s the street itself where the light comes from that makes the image on my retina.

But then how come things in the distance look smaller? Perspective is something that happens in your eye, not out in the world! In the real street things in the distance are not actually smaller, all the houses are exactly the same size. It is only on your retina that the farther ones appear smaller. And the image on your retina is only a flat 2-D image. This world out here is 3-D, but it has perspective. So is it the world itself? Or is it the image on the retina?

Well, it’s both. Light from the world makes a picture in your eye that lets you see the things out in the world. What’s so hard about that?

Ok then if this is the world itself, then why is everything bent around like a reflection in a Christmas bulb?

I don’t see anything bent. What are you talking about?

Well, take a look at this. See the two sides of this street? They are straight and parallel as far as the eye can see.

So?

But LOOK! Those straight parallel sides also MEET AT A POINT! RIGHT THERE! Can you SEE it?

Well they LOOK like they meet at a point. But they don’t really!

And if you turn around and look behind you, they meet at a point back there too!

So? I don’t get it.

So this street that we are standing on is shaped like the rind of a melon slice with two curved sides that meet at a point at either end. And those end points are at eye level, even though the street is under our feet.

Ok, vision is not perfect. So what?

So what? So we are living in a scale model, and the scale of the model shrinks progressively with depth, just like a museum diorama, or a theatre set. And at the back plane the scale shrinks to zero, at least in the depth dimension, where everything beyond a certain distance appears flat, as if painted on the dome of the sky.

But it only looks that way. It’s an illusion. We know that the world isn’t really warped like that.

Yes but HOW do we “know”? We “know” by using a warped reference scale to judge the objective size of things in the warped subjective world. If we measure distances, and even straightness itself, using this warped reference grid, we can see that all the houses are the same height and width and depth, and that they are really all straight and vertical, not warped and bulgy as they appear.

I don’t know if I see anything warped at all! Looks perfectly straight to me!

Of course! Relative to your warped reference scale! And take a look at what happens when you walk down the road. Things from far away expand outwards and get bigger and bigger until you pass them, and then they shrink back down again to a tiny little dot before they disappear altogether!

Hmmm, I suppose it is a bit like some kind of bubble.

And the strange thing is that the part where the world is biggest is always right where you are standing. Now look: you stay here and let me take a few paces. See? Now my world is biggest here where I stand, but your world is biggest over there where you stand. Either the world is a very elastic place, or you and I are looking at different bubbles.

But how can that be? Wouldn’t our bubbles collide?

Either that or they’re all part of one big elastic bubble. Except you’d think that I could see the distortion of your bubble from mine, and vice-versa.

No, that can’t be right either.

Well then the only other possibility is that we each have our own private bubble, and we can never see into anyone else’s bubble.

But how can that be? You can see quite clearly that we are both standing right here in the same space.

It only seems that way because in my bubble I have a picture of me here and you there, while in your bubble you have a picture of you there and me here. The pictures in our two bubbles are so similar that we assume that we are in the same space.

But then where are these bubbles? What are they made of? And whatever happened to the real world that we know exists independent of our experience of it? Where did it go? Does it not even exist? Is everything just a hallucination or lucid dream?

Of course it exists! Otherwise there would be nothing to keep the picture in your bubble synchronized with the picture in mine. Can’t you see? We are both looking at this same house from different perspectives, so there must be a real house there to be the common cause of both perspectives. We are each seeing our own virtual-reality replica of the real world, each from our own unique perspective.

But where is this virtual-reality picture? What is it made of?

If we know anything about neurophysiology, we know that it must be in the brain. The brain is the organ of conscious experience. Mind is nothing more than the operation of the physical brain.

So you’re telling me that everything I see around me is actually inside my head? How can that be? My head is right here, and all that is out there!

You cannot see the external world directly. You can only see it through your private conscious experience of it. So this world you see around you is the picture in your brain. In other words beyond the dome of the sky above, and beyond the solid earth underfoot, is the inner surface of your true physical skull.

Impossible! I don’t buy it! I don’t care what you say, I know this is the world, not just a picture in my head! I don’t see any curvature, the world is just plain straight. You must be crazy!


Part II: There is nothing strange at all!

Ok, then you tell me. How does vision work?

Well, first of all, this here is the real world, not some kind of image in anybody’s head. And there is nothing bent or bulgy about it, the world is perfectly straight! The sides of the road don’t converge, they only seem to converge. And they never do actually meet, if you look very carefully. And farther things are not really smaller, they only appear to be smaller. It’s an illusion.

Now light from the world enters your eye where it makes an image on your retina. The retina sends an electrical signal up the optic nerve that generates electrical activation in the visual cortex.

And as the cortex lights up electrically, you see the world around you.

Is the electrical activity in the cortex shaped like a street with houses under a domed sky?

No! Neurons chatter away in the brain in a pattern that is nothing like the shape of the world you see.

Then where does the shape of visual experience come from? Where is the picture that we see?

The world! It comes from the world! The shapes and colors you see around you are shapes and colors of the world, not patterns in your brain!

But what if you are having a lucid dream, or a hallucination? Then you are seeing shapes and colors that are not in the world. Where are the shapes and colors of the hallucinated scene?

They’re in your head of course! But in your head they do not have the shapes and colors you see. In your brain they are just a bunch of neurons chattering to each other.

But then where do the shapes and colors you see come from? Why don’t you just see the shapes of the neurons and their activations? What is it that turns the patterns of electrical activity into the patterns that we experience?

The pattern of electrical activation in your brain during a hallucination takes the same shape as it would during a normal perception of the hallucinated scene. So the shape and color that you experience are the shape and color that the world would have if you were perceiving it instead of hallucinating it. They are the shape and color of the world, not of your brain! Even if that world is imaginary!

But then what determines which neurons, or patterns of activity, represent which experienced shapes? What is the mapping between the shape of the neurons, or patterns of activation, and the shapes that we experience?

The mapping is learned from experience! One neuron, or neuron assembly, learns to fire whenever you see a house, while other neurons, or assemblies, learn to respond to windows, doors, and roofs, for example. The collective pattern of activation of all of these neurons together corresponds to our experience of the external scene.

And in dreams and hallucinations, the same constellation of neural activations produces the same kind of visual experience as a perception of the corresponding real scene!

But where is this experience located? We can see quite plainly that it is a spatial structure. But where is that structure? What is it made of and where is it located?

It’s out in the WORLD of course! Experience is right out here in the world where we observe the world to be!

So, let me get this straight. Experience is a spatial structure, as we can plainly see. And your experience occurs as the result of some kind of activation in your brain. But your experience is not in the brain, it is out in the world, although it is caused by electrical events in your physical brain.

Yeah, that’s right!

So let’s say I had a switch that could turn your brain on and off like a light bulb. Are you telling me that every time I turned it on, the experience would appear out there, but when I turn it off, then the experience disappears? Is experience like a beam of light from a flashlight that is projected outward from the brain? What is that experience made of? What is its substance?

It’s not made of ANYTHING! It is just EXPERIENCE! It doesn’t really EXIST in a physical sense! Nothing is actually projected, it is only EXPERIENCED to be projected!

Doesn’t exist? This here? The fabric of experience? Doesn’t exist??? All this here is really a bunch of scrambled neurons firing in my brain? I don’t care what you say, I see a spatial structure in experience, and if I see it, I know that it exists!


Part III: It’s all in your head!

Ok then, how do YOU explain visual processing? How does vision work in the brain?

Well, first and foremost, it is plainly evident that visual experience is a spatial structure, and it is produced by the brain. So unless we find compelling evidence to the contrary, that structure must be located in the brain! How the brain constructs spatial pictures remains a deep dark mystery. But that it does so is an observational fact.

So whether it is by coherent oscillations, standing waves, or some kind of Fourier code, somehow the tissue of the brain must be capable of generating three-dimensional moving images as rich and complex as this image of me that you see here!

Now of course the volumetric image may be warped and distorted in the brain…

…while still being a volumetric representation.

But as long as its connectivity, or functional architecture, is similarly warped and distorted, the warped image encodes the same volumetric information as its undistorted counterpart.

And apparently the volumetric image can even be fragmented into separate modules specialized for processing color, motion, binocular disparity, etc., while still producing a coherent, unified experience.

But whatever else we know about the visual representation, one thing is plainly obvious by inspection: the representational strategy used in the brain is an analogical one. In other words objects and surfaces are represented in perception not by an abstract symbolic code, nor by the activation of individual cells, or cell assemblies. Instead, objects are represented in the brain by constructing full spatial effigies of them that appear to us for all the world like the objects themselves.

Vision is televisual. It lets us see the remote external world through the medium of an internal replica of it.

But who is the viewer of this internal theatre of the mind? For whose benefit is this internal performance produced? Is it the little man at the center who sees this scene? But then how does HE see? Is there yet another smaller man inside that little man’s head, and so on to an infinite regress of observers within observers?

No, of course not! It only goes in one level! Take a look! What do you see inside your phenomenal head? I see nothing! It is an empty void! There is no infinite series of heads within heads, there is just a fuzzy brown emptiness with nothing inside, that opens to the world through the eyes like two open windows.

No, the little man at the center of our world of experience is not the observer of the internal scene. It is merely an object, made of the same substance as is the rest of the perceived scene, because that scene would be incomplete without a replica of our own body in the world. But the perceptual homunculus is more than a mere replica of the physical body. It is a computational mechanism constructed by the brain to help it control the body.

If the representational principle behind visual perception involves an explicit volumetric spatial model of external reality, then sensorimotor function might also be best implemented in the form of an explicit volumetric model of the body, like a wooden marionette, with hinges and ball joints at elbows and shoulders just like the real body that it represents by analogy.

But what makes the internal marionette a meaningful model of the larger body that it represents, is that it is coupled to the larger body somehow, so that the posture of the model always exactly mirrors the posture of the real body that it represents.


Motor control is tele-motive, like a virtual-reality body glove, electronically coupled to a remote android body that automatically replicates its posture.

Except in perception that android body is not remote, but surrounding, like a body glove suspended in a control room which is located inside the head of the giant android body that it controls.

And projected into that control room is a volumetric colored replica of the surrounding environment constructed on the basis of sensory input.

As the controller cavorts about in this synthetic reality, the larger android body cavorts in the external world, giving the controller the impression that he is interacting with the external world directly…

…although there are also times when it becomes abundantly clear that our view of the world is not direct or unmediated.

Now despite appearances to the contrary, the little man at the center of our world of experience is not the viewer or ‘experiencer’ of the surrounding virtual world. The body-image homunculus is just the interface between perceptual and motor function, expressed as an explicit model of the body in an explicit model of surrounding space.

Motor computation takes the form of spatial field-like forces in that space, that bend the body-image homunculus into different postures…

…and those postures are then mirrored in amplified form by the larger android body.

But the coupling works both ways. Forces from the external world are also communicated back in to the model world. For example gravity drags the android body downwards against the ground that pushes upward…

…and these forces are also replicated in the model environment where we perceive them as actual forces in what we believe to be the external world. The perceived force of synthetic gravity makes sense of the persistent resistance felt to upward motion of the body, just as the resistance of the ground underfoot to penetration is perceived as an upward force of rigid support.

We resist forces like the pull of gravity by constructing an equal and opposite force of levity, that raises our body-image homunculus upward against the downward pull while pushing downward against the ground. This symmetrically opposed motor force field is a spatial thought that appears under voluntary control, and it acts on the body image as a motor command. And as the motor force field moves the body-image homunculus…

…the larger android body moves in perfect synchrony with it, pushing harder against the ground to raise itself upward against gravity.

We move our hand by simply willing it to move in any direction we choose, and that will is itself a force that acts on our body-image hand which moves in response to the willed force. The larger android hand moves in synchrony, as if it were itself under the influence of a larger external force pulling it in the willed direction.

And our desire to move through the world is expressed as a larger force field that pulls on the entire body-image homunculus. But since the homunculus is anchored to the center of the control room, the only way he can advance forward…

…is by pushing the world backwards until his destination comes to where he is, or so it seems perceptually, resulting in real progress of the physical body through the physical world.

Perceived objects in the perceived environment also exert attractive and aversive forces that influence the movement of our body through the world. So whenever we perceive an object to be attractive, it automatically exudes an attractive field that tends to pull our body image towards it in perceptual space.

The larger body moves through external space exactly as if it were responding directly to field forces from the external world, except that these forces actually exist only in the internal perceptual world.


And when we perceive something to be aversive, it automatically exudes a repulsive field that pushes our body away from it in perceptual space.

The subjective impression of being attracted or repelled by attractive and repulsive stimuli is not only metaphorically true, but this subjective impression is a veridical manifestation of the mental mechanism that drives our motor response.

So in a sense it is the sensorimotor homunculus for whom the internal world of perception is constructed. But that homunculus does not “see” the internal scene…

…but rather the attractive and aversive features recognized in the scene exert forces on the body image homunculus, which in turn result in body motion through the environment.

Once we recognize the world of experience for the internal representation that it is, the computational strategy used in motor control becomes clearly evident by inspection.

So you are saying that vision, proprioception, somatosensation, and motor function are all part of a single integrated analog control system? But brain scientists have discovered a fragmented architecture in the cortex, with separate areas specialized for processing visual, auditory, and motor information. How would the activations in all those separate cortical areas integrate to produce a single unified experience? What is the binding force that binds them all together?

They are bound together by bi-directional causal connections, just like the dual controls of an airplane. If the student pulls the stick back while the instructor is pushing it forward, both control sticks move in perfect synchrony because they are connected, exactly as if the two pilots were actually pulling and pushing on the same stick.

In the same way, the different sensory/motor marionettes in different cortical areas are all coupled to each other, as well as to the larger external body, so that forces applied to one marionette are automatically and immediately transmitted in parallel to all the rest.

And if different forces are applied to different marionettes, the resultant body motion is exactly as if all of those forces were acting on a single virtual marionette.

So visual data is expressed in the visual representation, body posture is represented in the proprioceptive representation, and motor planning is computed in a global motor planning space. But all are coupled to form a single visual/proprioceptive/motor space, which is the space that we experience.

Each of these diverse representations is expressed in its own specialized slab of cortical tissue, although these areas are tightly coupled so as to form a single integrated computational module.

That is the most incredible hypothesis I have ever heard! Tell me honestly now—do you really believe this? Or are you being deliberately provocative for the sake of argument?

Look—if you once just accept the fact that this world we see around us is a picture in our head, all the rest of it follows by inspection! Besides, the alternative view, that we can somehow see the world directly, bypassing the sensory machinery in the eye and brain, is just plain magic!

So it comes down to a choice between two incredible hypotheses. Either you believe the incredible notion that your skull is larger than the entire perceived universe…

…or you believe the absurd notion that we can experience things outside of ourselves directly, beyond the sensory surface! One of these two incredible hypotheses simply must be true, and the other is just plain wrong!

Which one do YOU find less incredible?


Send comments and opinions to: Steve Lehar (slehar _+_a t _+_ g m a i l + c o m). Interesting comments or logical objections will be posted HERE.

For an in-depth philosophical presentation of the epistemological debate, see my on-line paper:

Gestalt Isomorphism and the Primacy of the Subjective Conscious Experience

or read my book

The World In Your Head: A Gestalt view of the mechanism of conscious experience

Steve Lehar

Digital Computers Will Remain Unconscious Until They Recruit Physical Fields for Holistic Computing Using Well-Defined Topological Boundaries

[Epistemic Status: written off the top of my head, thought about it for over a decade]

What do we desire for a theory of consciousness?

We want it to explain why and how the structure of our experience is computationally relevant. Why would nature bother to organize not only information per se, but our experiences, in richly structured ways that seem to track task-relevant computation (though at times in elusive ways)?

I think we can derive an explanation here. It is both very theoretically satisfying and literally mind-bending. This allows us to rule out vast classes of computing systems as having no more than computationally trivial conscious experiences.

TL;DR: We have richly textured bound experiences precisely because the boundaries that individuate us also allow us to act as individuals in many ways. This individual behavior can reflect features of the state of the entire organism in energy-efficient ways. Evolution can recruit this individual, yet holistic, behavior due to its computational advantages. We think that the boundary might be the result of topological segmentation in physical fields.


Marr’s Levels of Analysis and the Being/Form Boundary

One lens we can use to analyze the possibility of sentience in systems is the conceptual boundary between “being” and “form”. Here “being” refers to the interiority of things: what they are like intrinsically. “Form”, on the other hand, refers to how they appear from the outside. Where you place the being/form boundary influences how you make sense of the world around you. One factor that seems to be at play in where you place the being/form boundary is your implicit background assumptions about consciousness. In particular, how you think of consciousness in relation to Marr’s levels of analysis:

  • If you locate consciousness at the computational (or behavioral) level, then the being/form boundary might be computation/behavior. In other words, sentience simply is the performance of certain functions in certain contexts.
  • If you locate it at the algorithmic level, then the being/form boundary might become algorithm/computation. Meaning that what matters for the inside is the algorithm, whereas the outside (the form) is the function the algorithm produces.
  • And if you locate it at the implementation level, you will find that you identify being with specific physical situations (such as phases of matter and energy) and form as the algorithms that they can instantiate. In turn, the being/form boundary looks like crystals & bubbles & knots of matter and energy vs. how they can be used from the outside to perform functions for each other.

How you approach the question of whether a given chatbot is sentient will drastically depend on where you place the being/form boundary.


Many arguments against the sentience of particular computer systems are based on algorithmic inadequacy. This, for example, takes the form of choosing a current computational theory of mind (e.g. global workspace theory) and checking if the algorithm at play has the bare bones you’d expect a mind to have. This is a meaningful kind of analysis. And if you locate the being/form boundary at the algorithmic level then this is the only kind of analysis that seems to make sense.

What stops people from making successful arguments at the implementation level of analysis is confusion about the function of consciousness. Without a clear function, the question of which physical systems are or aren’t conscious seems inevitably to become an epiphenomenalist construct. Meaning that drawing boundaries around systems with specific functions is an inherently fuzzy activity, and any criteria we choose for whether a system is performing a certain function will be at best a matter of degree (and opinion).

The way of thinking about phenomenal boundaries I’m presenting in this post will escape this trap.

But before we get there, it’s important to point out the usefulness of reasoning about the algorithmic layer:

Algorithmic Structuring as a Constraint

I think that most people who believe that digital sentience is possible will concede that at least in some situations the Chinese Room is not conscious. The extreme example is when the content of the Chinese Room turns out to be literally a lookup table. Here a simple algorithmic concern is sufficient to rule out its sentience: a lookup table does not have an inner state! And what it does, from the point of view of its inner workings, is the same no matter how you relabel which input goes with which output. Whatever is inscribed in the lookup table (with however many replies and responses as part of the next query) is not something that the lookup table structurally has access to! The lookup table is, in an algorithmic sense, blind to what it is and what it does*. It has no mirror into itself.
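
To make the structural point concrete, here is a minimal sketch (hypothetical toy code of mine, not from any cited source) contrasting a lookup table with a system that carries inner state. Both can emit the same surface replies, but only the second consults something about itself when it answers:

```python
# A toy contrast (assumed example): lookup table vs. inner state.
lookup_table = {"hello": "hi!", "how are you?": "fine."}

def table_reply(msg):
    # No inner state is consulted; relabeling entries changes nothing
    # about how this system works from the inside.
    return lookup_table.get(msg, "...")

class StatefulAgent:
    def __init__(self):
        self.mood = 0.0  # inner state that persists across inputs

    def reply(self, msg):
        # The same message can yield different replies depending on
        # the system's own history: a minimal "mirror into itself".
        self.mood += 1.0 if "?" in msg else -0.5
        return "fine." if self.mood > 0 else "hi!"
```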

Algorithmic considerations are important. To not be a lookup table, we must have at least some internal representations. We must consider constraints on “meaningful experience”, such as probably having at least some of, or something analogous to: a decent number of working memory slots (and types), a good size of visual field, resolution of color in terms of Just Noticeable Differences, and so on. If your algorithm doesn’t even try to “render” its knowledge in some information-rich format, then it may lack the internal representations needed to really “understand”. Put another way: imagine that your experience is like a Holodeck. Ask the question of what is the lower bound on the computational throughput of each sensory modality and their interrelationships. Then see if the algorithm you think can “understand” has internal representations of that kind at all.
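
As a back-of-the-envelope illustration of such a lower bound (all figures here are rough order-of-magnitude assumptions of mine, not claims from the post):

```python
# Crude lower bound on visual throughput, order of magnitude only.
axons = 1_000_000   # optic nerve fibers per eye (approximate figure)
bits_per_axon = 10  # assumed effective bits per fiber per second
print(f"~{axons * bits_per_axon / 1e6:.0f} Mbit/s per eye (very rough)")
```

An algorithm claimed to “understand” visual scenes should have internal representations at least in this ballpark, per the argument above.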

Steel-manning algorithmic concerns involves taking a hard look at the number of degrees of freedom of our inner world-simulation (in, e.g., free-wheeling hallucinations) and making sure that there are implicit or explicit internal representations with roughly as much computational horsepower as those sensory channels.

I think that this is actually an easy constraint to meet relative to the challenge of actually creating sentient machines. But it’s a bare minimum. You can’t let yourself be fooled by a lookup table.

In practice, AI researchers will just care about metrics like accuracy, meaning that they will use algorithmic systems with complex internal representations like ours only if it computationally pays off to do so! (Hanson in Age of Em makes the bet that it is worth simulating a whole high-performing human’s experience; Scott points out we’d all be on super-amphetamines). Me? I’m extremely skeptical that our current mindstates are algorithmically (or even thermodynamically!) optimal for maximally efficient work. But even if normal human consciousness, or anything remotely like it, were such a global optimum that any other big computational task routes through it as an instrumental goal, I still think we would need to check whether the algorithm does in fact create adequate internal representations before we assign sentience to it.

Thankfully I don’t think we need to go there. I think that the most crucial consideration is that we can rule out a huge class of computing systems ever being conscious by identifying implementation-level constraints for bound experiences. Forget about the algorithmic level altogether for a moment. If your computing system cannot build a bound experience from the bottom up in such a way that it has meaningful holistic behavior, then no matter what you program into it, you will only have “mind dust” at best.

What We Want: Meaningful Boundaries

In order to solve the boundary problem we want to find “natural” boundaries in the world and scaffold off of them. We take on the starting assumption that the universe is a gigantic “field of consciousness”, so the question of how atoms come together to form experiences becomes the question of how this field becomes individuated into experiences like ours. So we need to find out how boundaries arise in this field. But these are not just any boundaries: they must be objective, frame-invariant, causally-significant, and computationally-useful. That is, boundaries you can do things with. Boundaries that explain why we are individuals and why creating individual bound experiences was evolutionarily adaptive; not only why it is merely possible but also why it is advantageous.

My claim is that boundaries with such properties are possible, and indeed might explain a wide range of puzzles in psychology and neuroscience. The full conceptually satisfying explanation results from considering two interrelated claims and understanding what they entail together. The two interrelated claims are:

(1) Topological boundaries are frame-invariant and objective features of physics

(2) Such boundaries are causally significant and offer potential computational benefits

I think that these two claims combined have the potential to explain the phenomenal binding/boundary problem (of course assuming you are on board with the universe being a field of consciousness). They also explain why evolution was even capable of recruiting bound experiences for anything. Namely, the same mechanism that logically entails individuation (topological boundaries) also has mathematical features useful for computation (examples given below). Our individual perspectives on the cosmos are the result of such individuality (a wrinkle in consciousness, so to speak) having non-trivial computational power.

In technical terms, I argue that a satisfactory solution to the boundary problem (1) avoids strong emergence, (2) sidesteps the hard problem of consciousness, (3) prevents the complication of epiphenomenalism, and (4) is compatible with the modern scientific world picture.

And the technical reason why topological segmentation provides the solution is that with it: (1) no strong emergence is required, because behavioral holism is only weakly emergent on the laws of physics; (2) we sidestep the hard problem via panpsychism; (3) phenomenal binding is not epiphenomenal, because the topological segments have holistic causal effects (such that evolution would have a reason to select for them); and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. In this post you’ll get a general walkthrough of the solution. The fully rigorous, step-by-step line of argumentation will be presented elsewhere. Please see the video for a detailed breakdown of alternative solutions to the binding/boundary problem and why they don’t work.

Holistic (Field) Computing

A very important move that we can make in order to explore this space is to ask ourselves if the way we think about a concept is overly restrictive. In the case of computation, I would claim that the concept is either applied extremely vaguely or made so rigorous that its application becomes too narrow to stay relevant. In the former case we have the tendency for people to equate consciousness with computation at a very abstract level (such as “resource gathering”, “making predictions”, and “learning from mistakes”). In the latter we have cases where computation is defined in terms of computable functions. The conceptual mistake to avoid is to think that just because you can compute a function with a Turing machine, you are therefore creating the same inner (bound or not) physical states along the way. And while, yes, it would be possible to approximate the field behavior we will discuss below with a Turing machine, it would be computationally inefficient (as it would need to simulate a massively parallel system) and it would lack the bound inner states (and their computational speedups) needed for sentience.

The (conceptual engineering) move I’m suggesting we make is to first of all enrich our conception of computation. To notice that we’ve lived with an impoverished notion all along.

I suggest that our conception of computation needs to be broad enough to include bound states as possible meaningful inputs, internal steps and representations, and outputs. This enriched conception of computation would be capable of making sense of computing systems that work with very unusual inputs and outputs. For instance, it has no problem thinking of a computer that takes as input chaotic superfluid helium and returns soap bubble clusters as outputs. The reason to use such exotic medium is not to add extra steps, but in fact to remove extra steps by letting physics do the hard work for you.


To illustrate just one example of what you can do with this enriched paradigm of computing, let’s now consider the hidden computational power of soap films. Say that you want to connect three poles with wire, and you want to minimize how much wire you use. One option is to use trigonometry and linear algebra; another is to use numerical simulations. But an elegant alternative is to create a model of the poles between two parallel planes and then submerge the structure in soapy water.

Letting the natural energy-minimizing property of soap bubbles find the shortest connection between three poles is an interesting way of performing a computation. It is uniquely adapted to the problem without needing tweaks or adjustments – the self-organizing principle will work the same (within reason) wherever you place the poles. You are deriving computational power from physics in a very customized way that nonetheless requires no tuning or external memory. And it’s all done simply by each point of the surface wanting to minimize its tension. Any non-minimal configuration will have potential energy, which then gets transformed into kinetic energy and makes it wobble, and as it wobbles it radiates out its excess energy until it reaches a configuration where it doesn’t wobble anymore. So you have to make the solution of your problem precisely a non-wobbly state!
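
As a minimal digital analogue of the soap film (a sketch under my own assumptions, not code from the post), we can let a candidate junction point slide downhill on total wire length; for three poles the film’s answer is the Steiner (Fermat) point:

```python
import numpy as np

poles = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])  # assumed positions

def total_length(p):
    return sum(np.linalg.norm(p - pole) for pole in poles)

p = poles.mean(axis=0)            # start at the centroid
for _ in range(2000):             # each step sheds "tension", like the film
    grad = sum((p - pole) / np.linalg.norm(p - pole) for pole in poles)
    p -= 0.01 * grad              # move against the tension gradient

print("junction ≈", p, "wire length ≈", round(total_length(p), 3))
```

The contrast with the film is the point: the film reaches this configuration with no stepping, no gradients, and no tuning, simply because every patch of it minimizes tension at once.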

In this way of thinking about computation, an intrinsic part of the question about what kind of thing a computation is will depend on what physical processes were utilized to implement it. In essence, we can (and I think should) enrich our very conception of computation to include what kind of internal bound states the system is utilizing, and the extent to which the holistic physical effects of such inner states are computationally trivial or significant.

We can call this paradigm of computing “Holistic Computing”.

From Soap Bubbles to ISING-Solvers Meeting Schedulers Implemented with Lasers

Let’s make a huge jump from soap-water-based computation. A much more general case that is nonetheless in the same family as using soap bubbles for compute is having a way to efficiently solve the ISING problem. In particular, having an analog physics-based annealing method in this case comes with unique computational benefits: it turns out that non-linear optics can do this very efficiently. You are, in a certain way, using the universe’s very frustration with the problem (don’t worry, I don’t think it suffers) to get it solved. Here is an amazing recent example: Ising Machines: Non-Von Neumann Computing with Nonlinear Optics – Alireza Marandi – 6/7/2019 (presented at Caltech).
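
For readers who want the shape of the problem in code, here is a minimal digital sketch (a toy of mine, not Marandi’s optical machine) of what an Ising solver does: relax a set of coupled spins toward a low-energy configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)        # spin configuration

def energy(s):
    return -0.5 * s @ J @ s            # E = -1/2 * sum_ij J_ij s_i s_j

T = 2.0
for _ in range(20000):
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)         # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                   # accept the flip
    T *= 0.9997                        # anneal: cool slowly

print("final energy:", energy(s))
```

The optical machine reaches the same kind of low-energy state physically, in parallel, which is exactly where the claimed efficiency comes from.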

The person who introduces Marandi in the video above is Kwabena Boahen, whose course at Stanford I had the honor of taking (and playing with the Neurogrid!). Back in 2012 something like the Neurogrid seemed like the obvious path to AGI. Today, ironically, people imagine scaling transformers is all you need. Tomorrow, we’ll recognize the importance of holistic field behavior and the boundary problem.

One way to get there on the computer science front will be by first demonstrating a niche set of applications where, e.g., non-linear optics ISING solvers vastly outperform GPUs for energy-minimization tasks in random graphs. But as the unique computational benefits become better understood, we will sooner or later switch from thinking about how to solve our particular problem to thinking about how we can cast our particular problem as an ISING/energy-minimization problem so that physics solves it for us. It’s like having a powerful computer that only speaks a very specific alien language. If you can translate your problem into its terms, it’ll solve it at lightning speed. If you can’t, it will be completely useless.

Intelligence: Collecting and Applying Self-Organizing Principles

This takes us to the question of whether general intelligence is possible without switching to a Holistic Computing paradigm. Can you have generally intelligent (digital) chatbots? In some senses, yes. In perhaps the most significant sense, no.

Intelligence is a contentious topic (see David Pearce’s helpful breakdown of six of its facets). One particular facet of intelligence that I find enormously fascinating and largely under-explored is the ability to make sense of new modes of consciousness and then recruit them for computational and aesthetic purposes. THC and music production have a long history of synergy, for instance. A composer who successfully uses THC to generate musical ideas others find novel and meaningful is applying this sort of intelligence. THC-induced states of consciousness are largely dysfunctional for a lot of tasks. But someone who utilizes the sort of intelligence (or meta-intelligence) I’m pointing to will pay attention to the features of experience that do have some novel use and lean on those. THC might impair working memory, but it also expands and stretches musical space: it intensifies reverb, softens rough edges in heart notes, increases emotional range, and adds synesthetic brown noise (which can enhance stochastic resonance). With wit and determination (and co-morbid THC/music addiction), musical artists exploit the oddities of THC musicality to great effect, arguably some much more successfully than others.

The kind of reframe that I’d like you to consider is that we are all in fact something akin to these stoner musicians. We were born with this qualia resonator with lots of cavities, kinds of waves, levels of coupling, and so on. And it took years for us to train it to make adaptive representations of the environment. Along the way, we all (typically) develop a huge repertoire of self-organizing principles we deploy to render what we believe is happening out there in the world. The reason why an experience of “meditation on the wetness of water” can be incredibly powerful is not because you are literally tuning into the resonant frequency of the water around you and in you. No, it’s something very different. You are creating the conditions for the self-organizing principle that we already use to render our experiences with water to take over as the primary organizer of our experience. Since this self-organizing principle does not, by its nature, generate a center, full absorption into “water consciousness” also has a no-self quality to it. Same with the other elements. Excitingly, this way of thinking also opens up our mind about how to craft meditations from first principles. Namely, by creating a periodic table of self-organizing principles and then systematically trying combinations until we identify the laws of qualia chemistry.

You have to come to realize that your brain’s relationship with self-organizing principles is like that of a Pokémon trainer and his Pokémon (ideally in a situation where Pokémon play the Glass Bead Game with each other rather than try to hurt each other– more on that later). Or perhaps like that of a mathematician and clever tricks for proofs, or a musician and rhythmic patterns, and so on. Your brain is a highly tamed inner space qualia warp drive usually working at 1% or less. It has stores of finely balanced and calibrated self-organizing principles that will generate the right atmospheric change to your experience at the drop of a hat. We are usually unaware of how many moods, personalities, contexts, and feelings of the passage of time there are – your brain tries to learn them all so it has them in store for whenever needed. All of a sudden: haze and rain, unfathomable wind, mercury resting motionless. What kind of qualia chemistry did your brain just use to try to render those concepts?

We are using features of consciousness -and the self-organizing principles it affords- to solve problems all the time without explicitly modeling this fact. In my conception of sentient intelligence, being able to recruit self-organizing principles of consciousness for meaningful computation is a pillar of any meaningfully intelligent mind. I think that largely this is what we are doing when humans become extremely good at something (from balancing discs to playing chess and empathizing with each other). We are creating very specialized qualia by finding the right self-organizing principles and then purifying/increasing their quality. To do an excellent modern day job that demands constraint satisfaction at multiple levels of analysis at once likely requires us to form something akin to High-Entropy Alloys of Consciousness. That is, we are usually a judiciously chosen mixture of many self-organizing principles balanced just right to produce a particular niche effect.

Meta-Intelligence

David Pearce’s conception of Full-spectrum Superintelligence is inspiring because it takes into account the state-space of consciousness (and what matters) in judging the quality of a certain intelligence in addition to more traditional metrics. Indeed, as another key conceptual engineering move, I suggest that we can and need to enrich our conception of intelligence in addition to our conception of computation.

So here is my attempt at enriching it further and adding another perspective. One way we can think of intelligence is as the ability to map a problem to a self-organizing principle that will “solve it for you” and having the capacity to instantiate that self-organizing principle. In other words, intelligence is, at least partly, about efficiency: you are successful to the extent that you can take a task that would generally require a large number of manual operations (which take time, effort, and are error-prone) and solve it in an “embodied” way.

Ultimately, a complex system like the one we use for empathy mixes both serial and parallel self-organizing principles for computation. Empathy is enormously cognitively demanding rather than merely a personality trait (e.g. agreeableness), as it requires a complex mirroring capacity that stores and processes information in efficient ways. Exploring exotic states of consciousness is even more computationally demanding. Both are error-prone.

Succinctly, I suggest we consider:

One key facet of intelligence is the capacity to solve problems by breaking them down into two distinct subproblems: (1) find a suitable self-organizing principle you can instantiate reliably, and (2) find out how to translate your problem into a format that the self-organizing principle can be pointed at, so that it solves it for you.

Here is a concrete example. If you want to disentangle a wire, you can try to first put it into a discrete data structure like a graph, and then get the skeleton of the knot in a way that allows you to simplify it with Reidemeister moves (and get lost in the algorithmic complexity of the task). Or you could simply follow the lead of Yu et al. 2021 and make the surfaces repulsive, letting this principle solve the problem for you.


These repulsion-based disentanglement algorithms are explained in this video. Importantly, how to do this effectively still needs fine-tuning. The method they ended up using was much faster than the (many) other ones tried (a Full-Spectrum Superintelligence would be able to “wiggle” the wires a bit if they got stuck, of course).
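
Here is a toy analogue of the idea (my own sketch under simplifying assumptions, not Yu et al.’s implementation): points along each curve repel all other points while springs hold each curve together, and untangling falls out of the local dynamics with no global planner:

```python
import numpy as np

rng = np.random.default_rng(1)
curves = [rng.normal(scale=0.5, size=(30, 2)) for _ in range(2)]  # tangled

def step(curves, dt=0.005, k_spring=5.0, k_repel=0.01):
    pts = np.concatenate(curves)
    new = []
    for c in curves:
        force = np.zeros_like(c)
        # springs between consecutive points keep each curve intact
        force[:-1] += k_spring * (c[1:] - c[:-1])
        force[1:]  += k_spring * (c[:-1] - c[1:])
        # every point is pushed away from every other point
        diff = c[:, None, :] - pts[None, :, :]
        dist = np.linalg.norm(diff, axis=-1) + 0.1  # softened for stability
        force += k_repel * (diff / dist[..., None] ** 3).sum(axis=1)
        new.append(c + dt * force)
    return new

for _ in range(500):
    curves = step(curves)
```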


This is hopefully giving you new ways of thinking about computation and intelligence. The key point to realize is that these concepts are not set in stone, and to a large extent may limit our thinking about sentience and intelligence. 

Now, I don’t believe that if you simulate a self-organizing principle of this sort you will get a conscious mind. The whole point of using physics to solve your problem is that in some cases you get better performance than by algorithmically representing a physical system and then using that simulation to instantiate self-organizing principles. Moreover, physics simulations, to the extent that they are implemented in classical computers, will fail to generate the field boundaries that would arise in the physical system. To note, physics-inspired simulations like Yu et al. 2021 are nonetheless enormously helpful to illustrate how to think of problem-solving with a massively parallel analog system.

Are Neural Cellular Automata Conscious?

The computational success of Neural Cellular Automata is primarily algorithmic. In essence, digitally implemented NCA are exploring a paradigm of selection and amplification of self-organizing principles, which is indeed a very different way of thinking about computation. But, critically, any NCA will still lack sentience. The main reasons are that they (a) don’t use physical fields with weak downward causation, and (b) don’t have a mechanism for binding/boundary-making. Digitally implemented cellular automata may have complex emergent behavior, but they generate no meaningful boundaries (i.e. objective, frame-invariant, causally-significant, and computationally-useful). That said, the computational aesthetic of NCA can be fruitfully imported into the study of Holistic Field Computing, in that the techniques for selecting and amplifying self-organizing principles already solved for NCAs may have analogues in how the brain recruits physical self-organizing principles for computation.
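
For orientation, a single NCA update step looks roughly like this (an untrained toy of my own, not the published models: real NCAs learn the update weights, and this sketch only shows the locality of the rule):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 32, 32, 8
state = np.zeros((H, W, C)); state[H // 2, W // 2] = 1.0  # one seed cell

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
W1 = rng.normal(scale=0.1, size=(3 * C, C))               # untrained weights

def conv2d(img, k):
    out = np.zeros_like(img)
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            out += k[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
    return out

def nca_step(s):
    # each cell perceives only its 3x3 neighborhood...
    perception = np.concatenate(
        [s, conv2d(s, sobel_x), conv2d(s, sobel_x.T)], axis=-1)
    # ...and updates through a small per-cell network
    return s + 0.1 * np.tanh(perception @ W1)

for _ in range(10):
    state = nca_step(state)
```

Note what is absent: nothing in the update couples the grid as a whole; all causation is local and hardcoded, which is exactly the point being made about boundaries.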

Exotic States of Consciousness

Perhaps one of the most compelling demonstrations that there is a whole zoo (or jungle) of self-organizing principles, out of which your brain recruits but a tiny narrow range, is to pay close attention to a DMT trip.

DMT states of consciousness are computationally non-trivial on many fronts. It is difficult to convey how enriched the set of experiential building blocks becomes in such states. Their scientific significance is hard to overstate. Importantly, the bulk of the computational power on DMT is dedicated to trying to make the experience feel good and not feel bad. The complexity involved in this task is often overwhelming. But one could envision a DMT-like state in which some parameters have been stabilized in order to recruit standardized self-organizing principles available only in a specific region of the energy-information landscape. I think that cataloguing the precise mathematical properties of the dynamics of attention and awareness on DMT will turn out to have enormous computational value. And a lot of this computational value will generally be pointed towards aesthetic goals.

To give you a hint of what I’m talking about: a useful QRI model (indeed, algorithmic reduction) of the phenomenology of DMT is that it (a) activates high-frequency metronomes that shake your experience and energize it with a high-frequency vibe, and (b) generates a new medium of wave propagation that allows very disparate parts of one’s experience to interact with one another.

3D Space Group (CEV on low dose DMT)

At a sufficient dose, DMT’s secondary effect also makes your experience feel sort of “wet” and “saturated”. Your whole being can feel mercurial and liquidy (cf: Plasmatis and Jim Jam). A friend speculates that’s what it’s like for an experience to be one where everything is touching everything else (all at once).

There are many Indra’s Net-type experiences in this space. In brief, experiences where “each part reflects every other part” are an energy minimum that also reduces prediction errors. And there is a fascinating non-trivial connection with the Free Energy Principle, where experiences that minimize internal prediction errors may display a lot of self-similarity.

To a first approximation, I posit that the complex geometry of DMT experiences is indeed the non-linearities of the DMT-induced wave-propagation medium that appear when it is sufficiently energized (so that it transitions from the linear to the non-linear regime). In other words, the complex hallucinations are energized patterns of non-linear resonance trying to radiate out their excess energy. Indeed, as you come down you experience the phenomenon of condensation of shapes of qualia.

Now, we currently don’t know what computational problems this uncharted cornucopia of self-organizing principles could solve efficiently. The situation is analogous to that of the ISING Solver discussed above: we have an incredibly powerful alien computer that will do wonders if we can speak its language, and nothing useful otherwise. Yes, DMT’s computational power is an alien computer in search of a problem that will fit its technical requirements.

Vibe-To-Shape-And-Back

Michael Johnson, Selen Atasoy, and Steven Lehar all have shaped my thinking about resonance in the nervous system. Steven Lehar in particular brought to my attention non-linear resonance as a principle of computation. In essays like The Constructive Aspect of Visual Perception he presents a lot of visual illusions for which non-linear resonance works as a general explanatory principle (and then in The Grand Illusion he reveals how his insights were informed by psychonautic exploration).

One of the cool phenomenological observations Lehar made based on his exploration with DXM was that each phenomenal object has its own resonant frequency. In particular, each object is constructed with waves interfering with each other at a high-enough energy that they bounce off each other (i.e. are non-linear). The relative vibration of the phenomenal objects is a function of the frequencies of resonance of the waves of energy bouncing off each other that are constructing the objects.

In this way, we can start to see how a “vibe” can be attributed to a particular phenomenal object. In essence, long intervals will create lower resonant frequencies. And if you combine this insight with QRI paradigms, you see how the vibe of an experience can modulate the valence (e.g. soft ADSR envelopes and consonance feeling pleasant, for instance). Indeed, on DMT you get to experience the high-dimensional version of music theory, where the valence of a scene is a function of the crazy-complex network of pairwise interactions between phenomenal objects with specific vibratory characteristics. Give thanks to annealing because tuning this manually would be a nightmare.

But then there is the “global” vibe…

Topological Pockets

So far I’ve provided examples of how Holistic Computing enriches our conception of intelligence and computing, and how it even shows up in our experience. But what I’ve yet to do is connect this with meaningful boundaries, as we set out to do. In particular, I haven’t explained why Holistic Computing would arise out of topological boundaries.

For the purpose of this essay I’m defining a topological segment (or pocket) to be a maximal region in which every point locally belongs to the same connected space; that is, a region that can’t be expanded further without this property becoming false.

The Balloons’ Case

In the case of balloons, this cashes out as: a topological segment is a region in which each point can reach any other point without having to pass through connector points/lines/planes. It is essentially the set of contiguous surfaces.
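
A toy version of this segmentation (my own illustrative sketch, not QRI’s formalism) is just connected-component labeling: regions count as one pocket exactly when they are contiguous:

```python
import numpy as np
from scipy.ndimage import label

field = np.zeros((8, 8), dtype=int)
field[:, 4] = 1                    # a "membrane" wall splitting the space

pockets, n = label(field == 0)     # contiguous regions of non-membrane
print(n, "pockets")                # -> 2
```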

Now, each of these pockets can have both a rich set of connections to other pockets as well as intricate internal boundaries. The way we could justify Computational Holism being relevant here is that the topological pockets trap energy, and thus allow the pocket to vibrate in ways that express a lot of holistic information. Each contiguous surface makes a sound that represents its entire shape, and thus behaves as a unit in at least this way.

The General Case

An important note here is that I am not claiming that (a) all topological boundaries can be used for Holistic Computing, or (b) to have Holistic Computing you need topological boundaries. Rather, I’m claiming that the topological segmentation responsible for individuating experiences does have applications for Holistic Computing, that this conceptually makes sense, and that it is why evolution bothered to make us conscious. But in the general case, you probably do get quite a bit of Holistic Computing without topological segmentation, and vice versa. For example, an LC circuit can be used for Holistic Computing on the basis of its steady analog resonance, but I’m not sure it creates a topological pocket in the EM fields per se.

At this stage of the research we don’t have a leading candidate for the precise topological feature of fields responsible for this. But the explanation space is promising based on being able to satisfy theoretical constraints that no other theory we know of can.

But I can nonetheless provide a proof of concept for how a topological pocket does come with really impactful holism. Let’s dive in!

Getting Holistic Behavior Out of a Topological Pocket

Creating a topological pocket may be consequential in one of several ways. One option for getting holistic behavior arises if you can “trap” energy in the pocket. As a consequence, you will energize its harmonics. The particular way the whole thing vibrates is a function of the entire shape at once. So from the inside, every patch now has information about the whole (namely, by the vibration it feels!).**
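
A small demonstration of this holism (a toy under my own assumptions): the vibrational spectrum of a structure is a global property, so changing the shape anywhere changes the modes that every part participates in. Compare the Laplacian eigenfrequencies of twelve points joined as an open path versus the same points joined as a ring:

```python
import numpy as np

def laplacian_spectrum(edges, n):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1   # degree terms
        L[i, j] -= 1; L[j, i] -= 1   # coupling terms
    return np.sort(np.linalg.eigvalsh(L))

n = 12
path = [(i, i + 1) for i in range(n - 1)]
ring = path + [(n - 1, 0)]           # one extra bond changes every mode

print(laplacian_spectrum(path, n)[:4])
print(laplacian_spectrum(ring, n)[:4])
```

Closing the ring adds a single local bond, yet every eigenfrequency shifts: each “patch” of the structure carries information about the whole in how it vibrates.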


One possible overarching self-organizing principle that the entire pocket may implement is valence-gradient ascent. In particular, some configurations of the field are more pleasant than others, and this has to do with the complexity of the global vibe. Essentially, the reason no part of it wants to be in a pocket with certain asymmetries is that those asymmetries make themselves known everywhere within the pocket by how the whole thing vibrates. Therefore, for the same reason a soap bubble can become spherical by each point on the surface trying to locally minimize tension, our experiences can become symmetrical and harmonious by having each “point” in them trying to maximize its local valence.

Self Mirroring

From Lehar’s Cartoon Epistemology

And here we arrive at perhaps one of the craziest but coolest aspects of Holistic Computing I’ve encountered. Essentially, if we go to the non-linear regime, then the whole vibe is not merely the weighted sum of the harmonics of the system. Rather, you might have waves interfere with each other in a concentrated fashion in the various cores/clusters, and in turn these become non-linear structures that will try to radiate out their energy. And to maximize valence there needs to be a harmony between the energy coming in and out of these dense non-linearities. In our phenomenology this may perhaps point to our typical self-consciousness. In brief, we have an internal avatar that “reflects” the state of the whole! We are self-mirroring machines! Now this is really non-trivial (and non-linear) Holistic Computing.

Cut From the Same Fabric

So here is where we get to the crux of the insight: weakly emergent topological changes can simultaneously have non-trivial causal/computational effects and solve the boundary problem. We avoid strong emergence but still get a kind of ontological emergence. Since each consciousness is cut out of one huge fabric of consciousness, we never need strong emergence in the form of “consciousness out of the blue all of a sudden”. What you have instead is a kind of ontological birth of an individual: the boundary legitimately creates a new being, even if in a way the total amount of consciousness stays the same. This is of course an outrageous claim (that you can get “individuals” by e.g. twisting the electric field in just the right way). But I believe the alternatives are far crazier once you understand what they entail.

In a Nutshell

To summarize, we can rule out that any of the current computational systems implementing AI algorithms have anything but trivial consciousness. If there are topological pockets created by e.g. GPUs/TPUs, they are epiphenomenal – the system is designed so that only the local influences it has hardcoded can affect the behavior at each step.

The reason the brain is different is that it has open avenues for solving the boundary problem. In particular, a topological segmentation of the EM field would be a satisfying option, as it would simultaneously give us holistic field behavior (computationally useful) and a genuine natural boundary. This extends the kind of model explored by Johnjoe McFadden (Conscious Electromagnetic Information Field) and Susan Pockett (Consciousness Is a Thing, Not a Process). They (rightfully) point out that the EM field can solve the binding problem, but the boundary problem then emerges in its place. With topological boundaries, finally, you can get meaningful boundaries (objective, frame-invariant, causally significant, and computationally useful).

This conceptual framework both clarifies what kind of system is at minimum required for sentience, and also opens up a research paradigm for systematically exploring topological features of the fields of physics and their plausible use by the nervous system.


* See the “Self Mirroring” section to contrast the self-blindness of a lookup table and the self-awareness of sentient beings.

** More symmetrical shapes will tend to have cleaner resonant modes. So to the extent that symmetry tracks fitness on some level (e.g. the ability to shed entropy), quickly estimating the spectral complexity of an experience can tell you how far it is from global symmetry, and possibly from health (explanation inspired by Johnson’s Symmetry Theory of Homeostatic Regulation).




Many thanks to Michael Johnson, David Pearce, Anders & Maggie, and Steven Lehar for many discussions about the boundary/binding problem. Thanks to Anders & Maggie and to Mike for discussions about valence in this context. And thanks to Mike for offering a steel-man of epiphenomenalism. Many thank yous to all our supporters! Much love!

Infinite bliss!

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and me on his AI Alignment Podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between and arguments for and against functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr’s Levels of Analysis, and we discuss the Symmetry Theory of Valence. We also get into identity and consciousness in the world, the is-ought problem, and what this all means for AI alignment and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ refers to how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that if you have a better theory of consciousness, you should be able to have a better theory of the brain, and if you have a better theory of the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism: the idea that phenomenology can be precisely represented mathematically. We borrow the goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions about how you even start is: what’s the simplest starting point? And here, I think one of our big innovations, not seen at any other research group, is that we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia formalism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists: we believe that emotional valence, or pain and pleasure, the goodness or badness of an experience, is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack the skeptic’s position on your view, and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, in which he basically creates a schema for how to organize knowledge about, in this particular case, how you actually make sense of the world visually. The framework goes as follows: there are three ways in which you can describe an information-processing system. First of all, the computational/behavioral level, which is about understanding the input-output mapping of the system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. An analogy here would be an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, and divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained: it is about figuring out what the internal representations are, and the possible manipulations of those representations, such that you get the input-output mapping described by the first level. Here you have an interesting relationship where understanding the first level doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be: whenever you want to add a number you just push a bead, and whenever you’re done with a row, you push all of the beads back and add a bead in the row underneath. And finally, you have the implementation level of analysis: what is the system actually made of? How is it constructed? All of these different levels ultimately map onto different theories of consciousness, depending on where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s would associate consciousness, if they gave any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement over many iterations.

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they care about the high-level algorithms that the brain is running, and will generally be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has bearing on what it is like to be that system.
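A tiny worked example of the gap between these levels (my illustration, not Marr’s own): the two functions below satisfy the same computational-level description, multiplying two non-negative integers, while running different algorithms under the hood.

```python
# Same computational level (input-output mapping), different algorithms.

def mult_repeated_addition(a: int, b: int) -> int:
    """Algorithm 1: add `a` to an accumulator `b` times."""
    total = 0
    for _ in range(b):
        total += a
    return total

def mult_shift_and_add(a: int, b: int) -> int:
    """Algorithm 2: binary shift-and-add, using O(log b) additions."""
    total = 0
    while b:
        if b & 1:
            total += a
        a <<= 1
        b >>= 1
    return total

# Identical behavior, so the computational level cannot distinguish them.
assert all(mult_repeated_addition(a, b) == mult_shift_and_add(a, b) == a * b
           for a in range(20) for b in range(20))
```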

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do: specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled a partition it can be, or whether, if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense, and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which tries to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There’s like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power: the fact that we are conscious matters for evolution. Evolution made us conscious for a reason, namely that consciousness is doing some computational legwork that would maybe be possible to do otherwise, but not as efficiently or as conveniently as it is with consciousness. Then you have the property of qualia: the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. These are in a sense completely different worlds, but they have the property that they can be part of a unified experience, one that can experience color at the same time as sound. In that sense, those different types of sensations fall under the category of consciousness, because they can be experienced together.

And finally, you have unity: the fact that you are capable of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge this unity and take it seriously.

Lucas: What are your guys’s intuition pumps for thinking that consciousness exists as a thing? Why is there qualia at all?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information-processing purposes in animals. The intuition here is that there are various contrasts you can have within experience which can serve a computational role. So, there may be a very deep reason why color qualia are used for the information processing associated with sight, and why tactile qualia are associated with the information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, like people who are synesthetic.

They may open their eyes and experience sounds associated with colors, and people tend to think of those cases as abnormal. I would flip it around and say that we are all synesthetic; it’s just that the synesthesia we have in general is very evolutionarily adaptive. The reason you experience colors when you open your eyes is that that type of qualia is really well suited to geometrically represent a projective space, which is something that naturally comes out of representing the world with a sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans who, whenever they open their eyes, experience sound, and who use that perfectly well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent, and perform, certain types of computations in a well-suited way. The intuition behind why we’re conscious, then, is that all of these contrasts in the structure of the relationships between possible qualia values have computational implications, and there are ways of using these contrasts in very computationally effective ways.
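As a toy rendering of this idea (my illustration; the coordinates below are arbitrary stand-ins, not a calibrated color space): if qualia values form a geometric map, then similarity judgments fall out as plain distance computations on that map.

```python
import numpy as np

# Hypothetical coordinates for a few color qualia in a 3-D state-space.
colors = {
    "red":    np.array([1.0, 0.0, 0.0]),
    "orange": np.array([1.0, 0.5, 0.0]),
    "blue":   np.array([0.0, 0.0, 1.0]),
}

def dissimilarity(c1, c2):
    """Euclidean distance as a stand-in for phenomenal dissimilarity."""
    return float(np.linalg.norm(colors[c1] - colors[c2]))

# Red sits nearer to orange than to blue, matching intuitive judgments;
# the geometry itself does the computational work of the comparison.
print(dissimilarity("red", "orange"))  # ~0.5
print(dissimilarity("red", "blue"))    # ~1.41
```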

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says that flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where you identify color qualia as just a certain type of computation. It may very well be that the geometric structure of color is actually just a particular algorithmic structure: whenever you have a particular type of algorithmic information processing, you get this geometric state-space. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to tackling them is to notice that when it comes down to actually defining what algorithms a given system is running, you hit a wall when you try to formalize exactly how to do it. One example is: how do you determine the scope of an algorithm? When you’re analyzing a physical system and trying to identify what algorithm it is running, are you allowed to contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is the natural boundary that lets you say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism, and one of the examples I brought up was: if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole-brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is running. So this is an unanswerable question from the perspective of functionalism, whereas a physical theory of consciousness would have a clear answer.
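Here is a toy version of that worry (my own construction): the same physical state sequence can be read as implementing different computations, depending entirely on the observer’s chosen encoding.

```python
# "Positions of popcorn kernels" after a shake: just a physical record.
states = [3, 1, 4, 1, 5, 9, 2, 6]

# Interpretation A: parity encodes the tape (odd = 1, even = 0).
bits_a = [s % 2 for s in states]

# Interpretation B: read successive increases/decreases as the tape.
bits_b = [int(b > a) for a, b in zip(states, states[1:])]

# Both readings are consistent with the same physics; nothing in the
# system itself picks out which computation it is "really" running.
print(bits_a)  # [1, 1, 0, 1, 1, 1, 0, 0]
print(bits_b)  # [0, 1, 0, 1, 1, 0, 1]
```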

Andrés: Another metaphor here: let’s say you’re at a park enjoying an ice cream, and imagine a machine I’ve created that runs, let’s say, algorithms isomorphic to whatever is going on in your brain. The particular algorithms your brain is running at that precise moment map, within a functionalist paradigm, onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on, yet according to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing is that you could break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you objectively decide what the boundary of the system is? Even in some of these states that are allegedly very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That casts into question whether there is any objective, non-arbitrary boundary you can draw around the system to say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically this: how is it possible that 100 billion neurons, spatially distributed as they are, simultaneously contribute to a unified experience just because they’re skull-bound, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like what the speed of propagation of information is for different states within the brain. I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience like experience seems unified, but experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world-simulation created by your brain. And of course, you’re fooled by it in countless ways, especially when it comes to emotional things. We look at a person and we might have an intuition about what type of person they are, and if we’re not careful, we can confuse our intuitions and feelings with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There are definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, which is basically what the experience is about (for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair, and so on), that is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience: how it presents itself. You can be deceived about what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer, based on a number of experiences, that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention on two spots of your visual field and have them harmonize. That’s the phenomenal character, and I would say that there’s a strong case to be made for not doubting that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gotten at.” It also seems to violate Occam’s razor, or a principle of lightness where one’s metaphysics or ontology should assume the fewest extra properties or entities needed in order to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. You could say, “Yeah, there is this thing we call static, and there’s this thing we call lightning, and there are these things we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe, that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so, it is a viable position today to say the same about consciousness: it’s not yet clear whether consciousness has deep structure. But we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it the specific theories and assumptions, which you take to be falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is: it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it takes 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then that wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here. A functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand it, how we behave?

Since the implementation level is kind of epiphenomenal from the point of view of the algorithm, how can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis can have bearing on another: in some cases the implementation level of analysis doesn’t matter for the algorithm, but in some cases it does. So, if you were implementing a computer with, let’s say, water, you have the option of implementing a Turing machine with water buckets, and in that case the implementation level of analysis goes out the window in the sense that it doesn’t really help you understand the algorithm.

But if the way you’re using water to implement algorithms is by creating a system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things, yet epiphenomenalism is true and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like the frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism: it’s seen as this very bad thing if a theory implies qualia are epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit in how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, there’s actually one thing that exists, and it has two projections or shadows. One projection is the physical world as far as we can tell, and the other projection is phenomenology, subjective experience. These are just two sides of the same coin, and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe, where the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of an account of consciousness, in contrast to functionalism. David Pearce, I think, would describe his view as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, “non-materialist” denies that the stuff of the world is fundamentally unconscious. That’s something materialism claims: that what the world is made of is not conscious, is raw matter so to speak.

“Physicalist”, again, in the sense that the laws of physics exhaustively describe behavior, and “idealist” in the sense that what makes up the world is qualia or consciousness. The big-picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and check out Consciousness 101, and just flip through it, looking at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalists’ take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence here, and how it fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. One common way to answer that is at the level of agents. Basically, you think of agents as entities who spin out possibilities for what actions to take, have a way of sorting them by expected utility, and then carry them out. A lot of people locate what we want, or like, or care about at that level, the agent level, whereas we think the true source of value is more low-level than that: there’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience. That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? One intuition pump is the extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or that if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view. But why would the sound of the Bay Area Rapid Transit, BART, which makes these very intense screeching sounds that are not even within the vocal range of humans, that are just really bizarre and were never encountered in our evolutionary past, nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns: it’s not just goals and actions and utility functions; the actual pattern of your experience may determine valence. The same goes for the SUBPAC, a technology that basically renders sounds between 10 and 100 hertz. Some of them feel really good, some of them feel pretty unnerving, some are anxiety-producing. Why would that be the case, especially when you’re getting types of input that have nothing to do with our evolutionary past?

It seems that there are ways of triggering high- and low-valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness, like meditation or psychedelics, which seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss, or pain. And again, they don’t seem to have much semantic content per se, or rather, the semantic content is not the core reason why they feel the way they do. It has to do more with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want. But all of these have counterexamples, and all of them have some point you can follow the thread back to where they beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some patterns just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, will sort of intrinsically encode, your emotional valence: how pleasant or unpleasant the experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is that there’s this intrinsic pleasure-pain axis of the world, and this is, I guess, channeling David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they’re just good or bad. So with this valence realism view, this potential for goodness or badness, whose nature is sort of self-intimatingly disclosed, has been embedded in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life; embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other, more specific, hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by standing in an experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists, would say there’s just another reinforcement-learning algorithm somewhere that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by realizing that liking, wanting, and learning are possible to dissociate; in particular, you can have reinforcement without an associated positive valence, and you can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something that matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive; we wouldn’t necessarily see it. We would also expect it to be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample is that somebody may claim that what they truly want is to be academically successful or something like that.

They think of their reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there are valence gradients happening. One way to show this would be to give the person an opioid antagonist on graduation day. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, which they thought all along was tied in with the intentional content, with the fact of graduating, when in fact it was the hedonic gloss they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, the idea is to break the problem down into modular pieces, with the thought that if we can decompose the problem correctly, then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems, then in aggregate you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math, and the interpretation. The first question is: what metaphysics do you even start with? With what ontology do you even approach the problem? And we’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there’s the question of what counts, what actively contributes to consciousness. Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses, but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head, it can directly contribute to my conscious experience, but even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine; there seems to be a boundary. One way of framing this is the boundary problem, and another way of framing it is the binding problem; these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness through this lens and has a certain style of answer, a style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

Once we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness, then we can figure out some principled way to draw a boundary and say: this is conscious experience A, this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary, and then there’s this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that, while we can be wrong in an absolute sense, we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven when the reality is a five; maybe that’s a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There’s this researcher, William Sethares, who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. What that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it’s going to be pretty low on average, and if you take the average dissonance of a minor key, it’s going to be higher. So in a sense, what distinguishes a minor from a major key is how frequently the notes are dissonant versus consonant in the combinatorial space of their possible combinations.
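To make Sethares’ recipe concrete, here is a minimal sketch (one common parameterization of his dissonance curve; I compare single triads rather than whole keys for brevity, and use the product of partial amplitudes, a standard variant):

```python
import numpy as np
from itertools import combinations

def partial_dissonance(f1, a1, f2, a2,
                       dstar=0.24, s1=0.0207, s2=18.96, b1=3.51, b2=5.75):
    """Sethares-style dissonance contributed by one pair of partials."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    x = dstar / (s1 * fmin + s2) * (fmax - fmin)
    return a1 * a2 * (np.exp(-b1 * x) - np.exp(-b2 * x))

def partials(f0, n=6):
    """First n harmonics of a note, with rolling-off amplitudes."""
    return [(f0 * k, 0.88 ** k) for k in range(1, n + 1)]

def total_dissonance(fundamentals):
    """Add up pairwise dissonance over every pair of harmonics."""
    ps = [p for f0 in fundamentals for p in partials(f0)]
    return sum(partial_dissonance(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in combinations(ps, 2))

# Major vs. minor triad on A (220 Hz): the minor chord typically sums
# to slightly more pairwise dissonance among its harmonics.
major = [220, 220 * 2 ** (4 / 12), 220 * 2 ** (7 / 12)]
minor = [220, 220 * 2 ** (3 / 12), 220 * 2 ** (7 / 12)]
print(total_dissonance(major), total_dissonance(minor))
```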

That’s a very ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light. The brain has natural resonant modes, and emotions may seem externally complicated: when you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition, and how do you even go about it? But deep down, the reason why a particular phrase sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very much as in music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.

These are electromagnetic waves, and it’s not exactly static, but it’s not exactly a standing wave either; it gets really close to one. Basically, there’s this excitation-inhibition wave function, and it happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which such a wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is a very simple one in which the two hemispheres are alternatingly more excited versus inhibited. That’s a low-frequency harmonic because it is a spatially very large wave, an alternating pattern of excitation. Much higher-frequency harmonics are much more detailed and obviously harder to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited form very thin wavefronts.

It’s not a mechanical wave as such; it’s an electromagnetic wave, so what actually fluctuates is the electric potential in each of these regions of the brain. Within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
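A minimal sketch of that picture (with a small random graph of my own standing in for the connectome): the harmonics are eigenvectors of the graph Laplacian, and any activity pattern decomposes into a weighted sum of them, the connectome analogue of a Fourier transform.

```python
import numpy as np

# Toy "connectome": a random undirected graph on 32 regions.
rng = np.random.default_rng(1)
n = 32
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0)
laplacian = np.diag(adj.sum(axis=1)) - adj

# Columns of `modes` are the harmonics; low eigenvalues correspond to
# spatially coarse modes (e.g., hemisphere-scale alternation), high
# eigenvalues to fine, thin wavefronts.
eigvals, modes = np.linalg.eigh(laplacian)

# Decompose an observed activity pattern into harmonic weights.
state = rng.normal(size=n)
weights = modes.T @ state                    # "Fourier" weights
assert np.allclose(modes @ weights, state)   # the weighted sum recovers it

print(weights[:5])  # the coarse-mode content of this brain state
```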

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is like a fundamental valence thing of the world, and all brains that come out of evolution should probably enjoy resonance?

Mike: It’s less about the stimulus, it’s less about the exact signal, and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or, in a precisely technical term, the consonance that counts, is the stuff that happens inside our brains. Empirically speaking, most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than, for example, dissonant stimuli do. But the stuff that counts is inside the head, not the stuff that is going into our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combine it with our Symmetry Theory of Valence. This is a way of basically getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? With the link to the Symmetry Theory of Valence, that should then be a very good proxy for how pleasant it is to be that brain.
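Here is a toy version of that pipeline (my stand-in math, not QRI’s published method): estimate where the energy sits in frequency space, then score the dominant peaks for harmony, using a crude ratio-simplicity heuristic in place of a proper consonance model.

```python
import numpy as np

fs = 256                              # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Toy "brain signal": energy at 5, 10, and 20 Hz (integer-related).
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.6 * np.sin(2 * np.pi * 10 * t)
          + 0.3 * np.sin(2 * np.pi * 20 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

# Fourier transform: where does the energy sit in frequency space?
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(power)[-3:]]   # three dominant frequencies

def ratio_simplicity(f1, f2):
    """1.0 when the frequency ratio is a simple integer ratio."""
    r = max(f1, f2) / min(f1, f2)
    return 1.0 / (1.0 + min(abs(r - k) for k in (1, 1.5, 2, 3, 4)))

pairs = [(a, b) for i, a in enumerate(peaks) for b in peaks[i + 1:]]
print(np.mean([ratio_simplicity(a, b) for a, b in pairs]))  # 1.0 here
# Detuning the 10 Hz component to, say, 9.1 Hz drops this score.
```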

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There are probably many ways of generating states of consciousness that are in a sense completely unnatural, not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness will cash out as differences in brain harmonics, because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about the connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on them. The first is what you call the real problem of consciousness, and the second is what David Chalmers calls the Meta problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained, or to be explained away? This cashes out in terms of whether it is something that can be formalized or something intrinsically fuzzy. I’m calling this the real problem of consciousness, and a lot depends on the answer to it. There are so many different ways to approach consciousness, and hundreds, perhaps thousands, of different carvings of the problem: panpsychism, dualism, non-materialist physicalism, and so on. I think essentially all of these theories sort themselves into two buckets along one core distinction: is consciousness real enough to formalize exactly, or not? This frame is perhaps the most useful one for evaluating theories of consciousness.

Lucas: And then there’s the Meta problem of consciousness, which is quite funny. It’s basically: why have we been talking about consciousness for the past hour, and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in the field. The Meta problem of consciousness doesn’t necessarily solve anything but it tries to inclusively start the conversation.

Andrés: The common move that people make here is to say that all of these crazy things we think and say about consciousness are just an information-processing system modeling its own attentional dynamics. That’s one illusionist frame. But even within a qualia-realist, qualia-formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, such that you need to have consciousness and so on, while still lacking introspective access. You could have these complicated conscious information-processing systems that don’t self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, basically detecting when you are at existential risk to yourself or when there are reproductive opportunities that you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience added together, in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, they assume everyone else is conscious too, and they think that the only place for value to reside is within consciousness, such that a world without consciousness is actually a world without any meaning or value. Even if we think that philosophical zombies, beings functionally identical to us but with no qualia or phenomenological or experiential states, are merely conceivable, it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, or valence, have some sort of special ethical or experiential status, unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence: that we should be wary of building a Disneyland with no children, some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, that doesn’t have anyone to enjoy it, basically. I would say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. There aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as for your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re allowed to probe the quality of your experience. In many states you believe that the reason why you like something is its intentional content. Take the case of graduating, or the case of getting a promotion, one of those things that a lot of people associate with feeling great. If you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss. You can manipulate it directly, again with things like opiate antagonists, and, if the symmetry theory of valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, when many different points of view agree on which aspect of the experience is what brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that shows any arbitrary mind the self-intimating nature of how good or bad an experience is? And in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought to be. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and frameworks in ethics make much more, or less, sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality, about what exists, what’s real, and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit “should” wrapped up in questions about valence, but that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame on valence. Whether or not valence also discloses, as David Pearce says, the utility function of the universe, that is another question and can be decomposed.

Andrés: One framing here too is that we do suspect valence is going to be the thing that matters in any mind, if you probe it in the right way in order to achieve reflective equilibrium. The best example is a talk a neuroscientist was giving at some point: there was something off, everybody seemed to be a little bit anxious or irritated, and nobody knew why. Then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There had been this subtle hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn’t realize that was the source of the irritation. When it got fixed, everybody went, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. Somebody in the year 2050 might come to one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me,” but if you put them through the scanner you identify that their 17th and 19th harmonics are in a state of dissonance. You cancel the 17th to make it cleaner, and then all of a sudden the person says, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us, why do I prefer this, why do I think that is worse, will all of a sudden become crystal clear from the point of view of objectively measured valence gradients.

Mike: One of my favorite phrases in this context is “what you can measure you can manage”, and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it. This could honestly open the door for a lot of amazing things, making the human condition just intrinsically better, but also maybe a lot of worrying things: being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems, they’re getting pretty strong, and they’re going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride, for now. So I’d like to discuss a little bit the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all, especially, to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even if you have the best technical solution in the world, a sufficiently good technical solution doesn’t mean that it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem: what are we even trying to do here? What is the optimal relationship between AI and humanity? And a couple of specific details here: first of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing X-risk.

Lucas: What kind of nihilism are you talking about here, like nihilism about morality and meaning?

Mike: Yes, I would say so, and also just personal nihilism, the feeling that nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide. Whereas how I think of it is that the real philosophical question is how to make love last, how to bring value to existence. And if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People talk about how global coordination on building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So, moving forward with AI alignment as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what’s up with that? This could be an interesting frame to explore for avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb for how to do that. One interesting possibility is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but compilers which optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So, just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment from the present day until superintelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s sort of understanding that, yeah, human preferences are insane. They’re just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There’s this frame in AI safety called the complexity of value thesis. I believe Eliezer came up with it in a post on LessWrong. It’s this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn’t think of as very valuable; maybe if we leave everything the same and take away just freedom, almost all of the value is lost. This paints a pretty sobering picture of how difficult AI alignment will be.
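To make the geometric picture concrete, here is a minimal numerical sketch of the fragility claim. The Gaussian value function, its width, and the dimensionality are illustrative assumptions introduced here, not part of the thesis itself:

```python
import numpy as np

# Illustrative sketch of the complexity-of-value thesis: human value as a
# narrow bump around a single point in a high-dimensional space, with value
# falling off sharply in every direction. All parameters are invented.

rng = np.random.default_rng(0)
D = 1000                           # dimensionality of the hypothetical value space
human_values = rng.normal(size=D)  # the "point" our values occupy

def value(world, sigma=0.1):
    """Toy value function: a narrow Gaussian bump around human_values."""
    distance = np.linalg.norm(world - human_values)
    return np.exp(-(distance / sigma) ** 2)

# Even a tiny random perturbation per dimension adds up to a large Euclidean
# distance in 1000 dimensions, and value collapses:
nearby_world = human_values + 0.01 * rng.normal(size=D)
print(value(human_values))  # 1.0
print(value(nearby_world))  # ~0.00005
```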

I think this is arguably the source of a lot of worry in the community: not only do we need to make machines that won’t just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly, if we move at all, we may enter a totally different trajectory, one that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing that I’m playing around with here is, instead of the complexity of value thesis, the unity of value thesis: it could be that many of the things that we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there’s some sort of elegant compression that can be made, and things aren’t actually so stark. We’re not a point in a super high-dimensional space where, if we leave the point, everything of value is trashed forever; maybe there’s some sort of convergent process that we can follow, that we can essentialize. We can make this list of 100 things that humanity values, and maybe what they all have in common is positive valence, and positive valence can be reverse engineered. To some people this feels like a very scary dystopic scenario – don’t knock it until you’ve tried it – but at the same time there’s a lot of complexity here.

One core frame that the idea of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different from what the complexity of value thesis puts forward. Maybe the actual goal is different, and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all the preferences and values that human beings have, and the valence realist view, which says that what’s ultimately good is certain experiential or hedonic states. I’m interested and curious about whether, if this valence view is true, it’s all just going to turn into hedonium in the end.

Mike: I’m personally a fan of continuity. I think that if we do things right we’ll have plenty of time to get things right, and also if we do things wrong then we’ll have plenty of time for things to be wrong. So I’m personally not a fan of big unilateral moves. It just gets back to this question of whether understanding what is can help us: clearly, yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it’s a seductive frame, I would say. If whole brain emulations wouldn’t necessarily have the same qualia as the original humans, based on hardware considerations, there could be some weird lock-in effects: if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here; I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment: this taxonomy that you’ve developed of open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about the implications here in AI alignment as you see them?

Andrés: Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It’s a pretty good taxonomy. Basically there’s open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we’re all one consciousness. Another framing is that our true identity is the light of consciousness, so to speak, so it doesn’t matter in what form it manifests; it’s always the same fundamental ground of being. Then you have the common sense view, called closed individualism: you start existing when you’re born, you stop existing when you die. You’re just this segment. Some religions actually extend that into the future or the past, with reincarnation or maybe with heaven.

It’s the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have the view called empty individualism, which is that you’re just a moment of experience. That’s fairly common among physicists and a lot of people who’ve tried to formalize consciousness; they often converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, seem to be implicitly based on closed individualism, without questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn’t actually carve nature at its joints, as a Buddhist might say, if the feeling of continuity, of being a separate unique entity, is an illusory construction of your phenomenology, that casts in a completely different light how to approach rationality itself, and even self-interest, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into very tricky situations like: what if there is mind-melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common sense view of identity, but they’re not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there’s probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we’re actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it’s evolutionarily adaptive is obviously not an argument for its being fundamentally true, but it does seem to be some kind of evolutionarily stable point to believe of yourself as the thing you can affect most directly in a causal way, if you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have. And if you think of a society of open individualists, where everybody is altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that the latter view would have a tremendous evolutionary advantage in that context. So I’m not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it: how to make it evolutionarily stable, and also how to make it ethical. It’s an open question. I do think it’s important to think about, and if you take consciousness very seriously, especially within physicalism, that will usually cast huge doubts on the common sense view of identity.

It doesn’t seem like a very plausible view if you actually try to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we’re not, by default, all empty individualists or open individualists. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future selves, since they’re not the same as you? That leads to a problem of defection. And open individualism, where everything is the same being, so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn’t allow altruistic punishment or any other way to stop the free riding. There’s interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one’s theory of identity. People are opening themselves up to getting hacked in different ways, and different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that’s really good and sometimes really bad. I would make the prediction that, if not open individualism in its full-fledged form, then at least a weaker sense of identity than closed individualism is likely going to be highly adaptive in the future, as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with only a narrow sense of identity will just stay in their shells, not trying to disturb the local attractor too much. That itself is not necessarily very advantageous, if the things on offer are actually really good, both hedonically and intelligence-wise.

I suspect that the people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the people making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity, I think, is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, in identity, I’d like to move beyond all distinctions of sameness or difference. To say, oh, we’re all one consciousness, to me seems like saying we’re all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world, a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

In the same way, it would be nonsense to say, “Oh, I am these specific atoms, I am just the forces of nature that are bounded within my skin and body.” In the same sense, with consciousness there is the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional uses, but in my view they also create a ton of epistemic problems and ethical issues. And in terms of valence theory, if a quality is actually something good or bad, then, as David Pearce says, it’s really just an epistemological problem that you don’t have access to other brain states in order to see the self-intimating nature of what it’s like to be that thing in that moment.

There’s a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it’s good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn’t matter where the bliss is. Bliss is bliss, and there’s no such thing as your bliss or anyone else’s bliss. Bliss is its own independent feature or property, and you don’t really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that’s playing out.

The universe is just peopling all of us at the same time, and when you get this view and see yourself as just a super thin slice of the evolution of consciousness and life, for me it’s like: why do I really need to propagate my information into the future? I really don’t think there’s anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see all that as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don’t have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that’s not necessarily super easy if you’re suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There’s an article I wrote that I just called “Consciousness vs. Replicators”. That kind of gets to the heart of this issue. It sounds a little bit like good and evil, but it really isn’t. The true enemy here is replication for replication’s sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and the benefit of consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark’s general frame of us living in this mathematical universe. One reframe of what we were just talking about in these terms: there are patterns which have to do with identity, patterns which have to do with valence, and patterns which have to do with many other things. The grand goal is to understand what makes a pattern good or bad, and to optimize our light cone for those sorts of patterns. This may have some counterintuitive implications; maybe closed individualism is actually a very adaptive thing that, in the long term, builds robust societies. It could be that that’s not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible: the best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. We tend to think of consciousness on the human scale: is the human condition good or bad, is the balance of human experience on the good end, the heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated, but it is falsifiable, and this gets into Bostrom’s simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, then, first of all, the human stuff might just be a rounding error.

Most of the value, in this case the positive and negative valence, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding, in the cosmological sense. I’m very suspicious that the Big Bang starts with a very symmetrical state; I’ll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not humans; cosmological-scale events or objects would be very interesting to point it at. This would give a much clearer answer than human intuition as to whether we live somewhere closer to heaven or hell.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, I think wonderful questions, and it’s very rare for an interviewer to have non-conventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org, and we’re working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We’re building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés’ at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey

The Pseudo-Time Arrow: Explaining Phenomenal Time With Implicit Causal Structures In Networks Of Local Binding

At this point in the trip I became something that I can not put into words… I became atemporal. I existed without time… I existed through an infinite amount of time. This concept is impossible to comprehend without having actually perceived it. Even now in retrospect it is hard to comprehend it. But I do know that I lived an eternity that night… 

 

– G.T. Currie, “Impossible to Understand Reality: An Experience with LSD”

Time distortion is an effect that makes the passage of time feel difficult to keep track of and wildly distorted.

 

PsychonautWiki

Introduction

What is time? When people ask this question it is often hard to tell what they are talking about. Indeed, without making explicit one’s background philosophical assumptions this question will usually suffer from a lot of ambiguity. Is one talking about the experience of time? Or is one talking about the physical nature of time? What sort of answer would satisfy the listener? Oftentimes this implicit ambiguity is a source of tremendous confusion. Time distortion experiences deepen the mystery; the existence of exotic ways of experiencing time challenges the view that we perceive the passage of physical time directly. How to disentangle this conundrum?

Modern physics has made enormous strides in pinning down what physical time is. As we will see, one can reduce time to causality networks, and causality to patterns of conditional statistical independence. Yet in the realm of experience the issue of time remains much more elusive.

In this article we provide a simple explanatory framework that accounts for both the experience of time and its relation to physical time. We then sketch out how this framework can be used to account for exotic experiences of time. We end with some thoughts pertaining to the connection between the experience of time and valence (the pleasure-pain axis), which may explain why exotic experiences of the passage of time are frequently intensely emotional in nature.

To get there, let us first lay out some key definitions and background philosophical assumptions:

Key Terminology: Physical vs. Phenomenal Time

Physical Time: This is the physical property that corresponds to what a clock measures. In the philosophy of time we can distinguish between eternalism and presentism. Eternalism postulates that time is a geometric feature of the universe, best exemplified by the “block universe” metaphor (i.e. where time is another dimension alongside our three spatial dimensions). Presentism, instead, postulates that only the present moment is real; the past and the future are abstractions derived from the way we experience patterns in sequences of events. The past is gone, and the future has yet to come.

Now, it used to be thought that there was a universal metronome that dictated “what time it is” in the universe. With this view one could reasonably support presentism as a viable account of time. However, ever since Einstein’s theory of relativity was empirically confirmed, we have known that there is no absolute frame of reference. Based on the fundamental unity of space and time presented by relativity, and the absence of an absolute frame of reference, we find novel and interesting arguments in favor of eternalism and against presentism (e.g. the Rietdijk–Putnam argument). On the other hand, presentists have rightly argued that the ephemeral nature of the present is self-revealing to any subject of experience. Indeed, how can we explain the feeling of the passage of time if reality is in fact a large geometric “static” structure? While this article does not need to take sides between eternalism and presentism, we will point out that the way we explain the experience of time will in turn diminish the power of presentist arguments based on the temporal character of our experience.

Phenomenal Time: This is what the passing of time feels like. Even drug-naïve individuals can relate to the fact that the passage of time feels different depending on one’s state of mind. The felt sense of time depends on one’s level of arousal (deeply asleep, dreaming, tired, relaxed, alert, wide awake, etc.) and hedonic tone (depressed, anxious, joyful, relaxed, etc.). Indeed, time hangs heavy when one is in pain, and seems to run through one’s fingers when one is having a great time. More generally, when taking into account altered states of consciousness (e.g. meditation, yoga, psychedelics) we see that there is a wider range of experiential phenomena than is usually assumed. Indeed, one can see that there are strange generalizations of phenomenal time. Examples of exotic phenomenal temporalities include: tachypsychia (aka time dilation), time reversal, short-term memory tracers, looping, “moments of eternity“, temporal branching, temporal synchronicities, timelessness, and so on. We suggest that any full account of consciousness ought to be able to explain all of these variants of phenomenal time (among other key features of consciousness).

Key Background Assumptions

We shall work under three key assumptions. First, indirect realism about perception. Second, mereological nihilism in the context of consciousness, meaning that one’s stream of consciousness is composed of discrete “moments of experience”. And third, Qualia Formalism, the view that each moment of experience has a mathematical structure whose features are isomorphic to the features of the experience. Let us unpack these assumptions:

1. Indirect Realism About Perception

This view also goes by the name of representationalism or simulationism (not to be confused with the simulation hypothesis). In this account, perception as a concept is shown to be muddled and confused. We do not really perceive the world per se. Rather, our brains instantiate a world-simulation that tracks fitness-relevant features of our environment. Our sensory apparatus merely selects which specific world-simulation our brain instantiates. In turn, our world-simulation causally co-varies with the input our senses receive and the motor responses it elicits. Furthermore, evolutionary selection pressures in some cases work against accurate representations of one’s environment (so long as accuracy is not fitness-enhancing). Hence, we could say that our perception of the world is an adaptive illusion more than an accurate depiction of our surroundings.

A great expositor of this view is Steve Lehar. We recommend his book about how psychonautical experience makes clear the fact that we inhabit (and in some sense are) a world-simulation created by our brain. Below you can find some pictures from his “Cartoon Epistemology“, which narrates a dialogue between a direct and an indirect realist about perception:


Steve Lehar also points out that the very geometry of our world-simulation is that of a diorama. We evolved to believe that we can experience the world directly, and the geometry of our world-simulation is very well crafted to keep us under a sort of spell that makes us believe we are the little person watching the diorama. This world-simulation has a geometry capable of representing both nearby regions and far-away objects (and even points-at-infinity), and it represents the subject of experience with a self-model at its projective center.

We think that an account of how we experience time is possible under the assumption that experiential time is a structural feature of this world-simulation. In turn, we would argue that implicit direct realism about perception irrevocably conflates physical time and phenomenal time. For if one assumes that one somehow directly perceives the physical world, doesn’t that mean that one also perceives time? But in that case, what are we to make of exotic time experiences? With indirect realism we realize that we inhabit an inner world-simulation that causally co-varies with features of the environment, and we hence resolve to find the experience of time within the confines of one’s own skull.

2. Discrete Moments of Experience

A second key assumption is that experiences are ontologically unitary rather than merely functionally unitary. The philosophy of mind involved in this key assumption is unfortunately rather complex and easy to misunderstand, but we can at least say the following. Intuitively, as long as one is awake and alert, it feels like one’s so-called “stream of consciousness” is an uninterrupted and continuous experience. Indeed, at the limit, some philosophers have even argued that one is a different person each day; subjects of experience are, as it were, delimited by periods of unconsciousness. We instead postulate that the continuity of experience from one moment to the next is an illusion caused by the way experience is constructed. In reality, our brains generate countless “moments of experience” every second, each with its own internal representation of the passage of time and the illusion of a continuous diachronic self.

Contrast this discretized view of experience with deflationary accounts of consciousness (which insist that there is no objective boundary that delimits a given moment of experience) and functionalist accounts of consciousness (which would postulate that experience is smeared across time over the span of hundreds of milliseconds).

The precise physical underpinnings of a moment of experience have yet to be discovered, but if monistic physicalism is to survive, it is likely that the (physical) temporal extension that a single moment of experience spans is incredibly thin (possibly no more than 10^-13 seconds). In this article we make no assumptions about the actual physical temporal extension of a moment of experience. All we need to say is that it is “short” (most likely under a millisecond).

It is worth noting that the existence of discrete moments of experience supports an Empty Individualist account of personal identity. That is, a person’s brain works as an experience machine that generates many conscious events every second, each with its own distinct coordinates in physical space-time and unique identity. We would also argue that this ontology may be compatible with Open Individualism, but the argument for this shall be left to a future article.

3. Qualia Formalism

This third key assumption states that the quality of all experiences can be modeled mathematically. More precisely, for any given moment of experience, there exists a mathematical object whose mathematical features are isomorphic to the features of the experience. At the Qualia Research Institute we take this view and run with it to see where it takes us. Which mathematical object can fully account for the myriad structural relationships between experiences is currently unknown. Yet we think that we do not need to find the One True Mathematical Object in order to make progress in formalizing the structure of subjective experience. In this article we will simply invoke directed graphs as the mathematical object that encodes the structure of local binding of a given experience. But first, what is “local binding”? We will borrow David Pearce’s explanation of the terms involved:

The “binding problem”, also called the “combination problem”, refers to the mystery of how the micro-experiences mediated by supposedly discrete and distributed neuronal edge-detectors, motion-detectors, shape-detectors, colour-detectors, etc., can be “bound” into unitary experiential objects (“local” binding) apprehended by a unitary experiential self (“global” binding). Neuroelectrode studies using awake, verbally competent human subjects confirm that neuronal micro-experiences exist. Classical neuroscience cannot explain how they could ever be phenomenally bound. As normally posed, the binding problem assumes rather than derives the emergence of classicality.

 

Non-Materialist Physicalism by David Pearce

In other words, “local binding” refers to the way in which the features of our experience seem to be connected and interwoven into complex phenomenal objects. We do not see a chair as merely a disparate set of colors, edges, textures, etc. Rather, we see it as an integrated whole with fine compositional structure. Its colors are “bound” to its edges which are “bound” to its immediate surrounding space and so forth.

A simple toy model for the structure of an experience can be made by saying that there are “simple qualia” such as color and edges, and “complex qualia” formed by the binding of simple qualia. In turn, we can represent an experience as a graph where each node is a simple quale and each edge is a local binding connection. The resulting globally connected graph corresponds to the “globally bound” experience. Each “moment of experience” is, thus, coarsely at any rate, a network.
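As a minimal sketch of this toy model (the qualia labels and attributes below are invented purely for illustration), an experience graph might look like this:

```python
import networkx as nx

# Toy model from the text: simple qualia are nodes, local binding
# relationships are edges, and a globally bound "moment of experience"
# corresponds to a connected graph. The labels are illustrative.

experience = nx.Graph()
experience.add_nodes_from([
    ("red",    {"kind": "color"}),
    ("edge1",  {"kind": "edge"}),
    ("smooth", {"kind": "texture"}),
])
# Local binding: the color is bound to the edge, and the edge to the texture.
experience.add_edges_from([("red", "edge1"), ("edge1", "smooth")])

# Global binding corresponds to the graph being connected:
print(nx.is_connected(experience))  # True -> one unified moment of experience
```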

While this toy model is almost certainly incomplete (indeed some features of experience may require much more sophisticated mathematical objects to be represented properly), it is fair to say that the rough outline of our experience can be represented with a network-like skeleton encoding the local binding connections. More so, as we will see, this model will suffice to account for many of the surprising features of phenomenal time (and its exotic variants).

Timeless Causality

While both physical and phenomenal time pose profound philosophical conundrums, it is important to note that science has made a lot of progress in providing formal accounts of physical time. Perhaps confusingly, even Einstein’s theory of general relativity is time-symmetric, meaning that the universe would behave the same whether time moved forwards or backwards. Hence relativity does not, on its own, provide a direction to time. What does provide a direction to time are properties like the entropy gradient (i.e. the direction along which disorder is globally increasing) and, the focus of this article, causality as encoded in the network of statistical conditional independence. This is a mouthful; let us tackle it in more detail.

In Timeless Causality, Yudkowsky argues that one can tell the direction of causality (and hence of the arrow of time) by examining how conditioning on some events informs us about others. We recommend reading the linked article for details (and, for a formal account, SEP’s entry on the matter).

In the image above we have a schematic representation of two measurables (1 & 2) at several times (L, M, and R). The core idea is that we can determine the flow of causality by examining the patterns of statistical conditional independence, with questions like “if I’ve observed L1 and L2, do I gain information about M1 by learning about M2?”, and so on.*
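As a self-contained sketch of this idea (assuming linear-Gaussian toy data arranged in the L → M → R pattern above): conditioning on the causal parents screens the two measurables off from each other, while conditioning on their common effects induces a spurious dependence, and this asymmetry singles out the direction of causation.

```python
import numpy as np

# Toy data with causality flowing L -> M -> R (two measurables per time).
rng = np.random.default_rng(1)
n = 100_000
L1, L2 = rng.normal(size=n), rng.normal(size=n)
M1 = L1 + 0.5 * L2 + rng.normal(size=n)   # M is caused by L
M2 = 0.5 * L1 + L2 + rng.normal(size=n)
R1 = M1 + 0.5 * M2 + rng.normal(size=n)   # R is caused by M
R2 = 0.5 * M1 + M2 + rng.normal(size=n)

def partial_corr(x, y, conditioners):
    """Correlation between x and y after regressing out the conditioners."""
    Z = np.column_stack(conditioners + [np.ones(len(x))])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Conditioning on the causes (L) screens M1 off from M2...
print(partial_corr(M1, M2, [L1, L2]))  # ~ 0.0
# ...while conditioning on the effects (R) induces a dependence instead:
print(partial_corr(M1, M2, [R1, R2]))  # clearly nonzero
```

The asymmetry between the two printed numbers is what reveals that L is upstream and R is downstream, without ever labeling the time axis.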

Along the same lines, Wolfram has done research on how time may emerge in automata based on rule-governed network modifications.


Intriguingly, these models of time and causality are tenseless and hence eternalist. The whole universe works as a unified system in which time appears as an axis rather than as a metaphysical universal metronome. But if eternalism is true, how come we can feel the passage of time? If moments of experience exist, how come we seem to experience movement and action? Shouldn’t we experience just a single static “image”, like seeing a single movie frame without being aware of the previous ones? We are now finally ready to tackle these questions and explain how time may be encoded in the structure of one’s experience.

Pseudo-Time Arrow


Physical Time vs. Phenomenal Time (video source)

In the image above we contrast physical and phenomenal time explicitly. The top row shows the physical state of a scene in which a ball is moving along a free-falling parabolic trajectory. In turn, a number of these states are aggregated by a process of layering (second row) into a unified “moment of experience”. As seen in the third row, each moment of experience represents the “present scene” as the composition of three slices of sensory input with a time-dependent dimming factor. Namely, the scene experienced is approximated with a weighted sum of three scenes, with the most recent one weighted the highest and the oldest the least.

In other words, at the coarsest level of organization time is encoded by layering the current input scene with faint after-images of very recent input scenes. In healthy people this process is rather subtle yet always present. Indeed, after-images are an omnipresent feature of sensory modalities (beyond sight).

A simple model of how after-images are layered on top of each other to generate a scene with temporal depth involves what we call “time-dependent qualia decay functions”. Such a function determines how quickly sensory (and internal) impressions fade over time. Psychedelics, for example, make this decay function significantly fatter (long-tailed), while stimulants make it slightly shorter (i.e. a higher signal-to-noise ratio at the cost of reduced complex image formation).
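Here is a minimal sketch of this layering model; the specific decay functions and parameters are illustrative assumptions, chosen only to contrast a fast (sober-like) decay with a long-tailed (psychedelic-like) one.

```python
import numpy as np

def exponential_decay(age, tau=1.0):
    """Fast decay: a rough stand-in for sober after-images."""
    return np.exp(-age / tau)

def heavy_tailed_decay(age, alpha=1.0):
    """Fat-tailed decay: a rough stand-in for psychedelic tracers."""
    return 1.0 / (1.0 + age) ** alpha

def experienced_scene(frames, decay):
    """Blend recent input frames into one "present scene".

    frames[-1] is the most recent input; older frames get larger ages and
    therefore smaller weights under the decay function.
    """
    ages = np.arange(len(frames))[::-1]
    weights = np.array([decay(a) for a in ages], dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, np.array(frames), axes=1)

# Three grayscale "scenes" (tiny 2x2 arrays) blended into one moment:
frames = [np.full((2, 2), v) for v in (0.2, 0.5, 1.0)]
print(experienced_scene(frames, exponential_decay))   # dominated by the present
print(experienced_scene(frames, heavy_tailed_decay))  # older layers linger
```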

With this layering process going on, and the Qualia Formalist model of experience as a network of local binding, we can further find a causal structure in experience akin to that in physical time (as explained in Timeless Causality):

Again, each node of the network represents a simple quale and each edge represents a local binding relationship between the nodes it connects. Then, we can describe the time-dependent qualia decay function as the probability that a node or an edge will vanish at each (physical) time step.


The rightmost nodes and edges are the most recent qualia triggered by sensory input. Notice how the nodes and edges vanish probabilistically with each time step, making the old layers sparsely populated.

With a sufficiently large network one would be able to decode the direction of causality (and hence of time) using the same principles of statistical conditional independence used to account for physical time. What we are proposing is that this underlies what time feels like.
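The following toy simulation (all probabilities and layer sizes are illustrative) implements this picture: each physical time step adds a fresh layer of qualia nodes bound to the previous layer, while existing nodes and edges vanish probabilistically, leaving the sparsely populated older layers described above.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)

def step(G, t, layer_size=4, p_node_vanish=0.3, p_edge_vanish=0.2):
    """Advance the experience network by one physical time step."""
    # Old qualia (nodes) and binding connections (edges) decay probabilistically:
    G.remove_nodes_from([n for n in list(G) if rng.random() < p_node_vanish])
    G.remove_edges_from([e for e in list(G.edges) if rng.random() < p_edge_vanish])
    # Fresh sensory input arrives as a new layer bound to the surviving
    # nodes of the previous layer:
    new = [(t, i) for i in range(layer_size)]
    previous = [n for n in G if n[0] == t - 1]
    G.add_nodes_from(new)
    for a in new:
        for b in previous:
            G.add_edge(a, b)

G = nx.Graph()
for t in range(10):
    step(G, t)
# Recent layers dominate and older layers are sparse: the statistical
# signature from which a pseudo-time arrow could be decoded.
print(sorted(n[0] for n in G))
```

In principle, running the same conditional-independence analysis from the previous section on a large version of this network would recover the direction of its pseudo-time arrow.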

Now that we understand what the pseudo-time arrow is, what can we do with it?

Explanatory Power: How the Pseudo-Time Arrow Explains Exotic Phenomenal Time

Let us use this explanatory framework on exotic experiences of time. That is, let us see how the network of local binding and its associated pseudo-time arrows can explain unusual experiences of time perception.

To start we should address the fact that tachypsychia (i.e. time dilation) could either mean (a) that “one experiences time passing at the same rate but that this rate moves at a different speed relative to the way clocks tick compared to typical perception” or, more intriguingly, (b) that “time itself feels slower, stretched, elongated, etc.”.

The former (a) is very easy to explain, while the latter requires more work. Namely, time dilation of the former variety can be explained by an accelerated or slowed-down sensory sampling rate, such that the (physical) temporal interval between each layer is either longer or shorter than usual. In this case the structure of the network does not change; what is different is how it maps onto physical time. If one were in a sensory deprivation chamber and this type of time dilation were going on, one would not be able to tell, since the quality of phenomenal time (as encoded in the network of local binding) remains the same as before. Perhaps compare how it feels to see a movie in slow motion relative to seeing it at its original speed while being perfectly sober. Since one is sober either way, what changes is how quickly the world seems to move, not how one feels inside.

The latter (b) is a lot more interesting. In particular, phenomenal time is often incredibly distorted when taking psychedelics in a way that is noticeable even in sensory deprivation chambers. In other words, it is the internal experience of the passage of time that changes rather than the layering rate relative to the external world. So how can we explain that kind of phenomenal time dilation?

Psychedelics

The most straightforward effect of psychedelics one can point out with regard to the structure of one’s experience is that qualia seem to last for much longer than usual. This manifests as “tracers” in all sensory modalities. Using the vocabulary introduced above, we would say that psychedelics change the time-dependent qualia decay function by making it significantly “fatter”. While in sober conditions the positive after-image of a lamp will last between 0.2 and 1 second, on psychedelics it will last anywhere between 2 and 15 seconds. This results in a much more pronounced and perceptible layering process of experience. Using Lehar’s diorama model of phenomenal space, we could represent various degrees of psychedelic intoxication with the following progression:

The first image is what one experiences while sober. The second is what one experiences on, e.g., 10 micrograms of LSD (i.e. microdosing), where there is a very faint additional layer that is at times indistinguishable from sober states. The third, fourth, and fifth images represent what tracers may feel like on ~50, ~150, and ~300 micrograms of LSD, respectively. The last image is perhaps most reminiscent of DMT experiences, which produce a uniquely powerful and intense high-frequency layering at the onset of the trip.

In the graphical model of time, we could say that the structure of the network changes via (1) a lower probability for each node to vanish at each (physical) time step, and (2) an even lower probability for each edge to vanish at each (physical) time step. The tracers experienced on psychedelics are more than just a layering process; the density of connections also increases. That is to say, while simple qualia last longer, the connections between them are even longer-lasting. The inter-connectivity of experience is enhanced.


A low dose of a psychedelic will lead to a slow decay of simple qualia (colors, edges, etc.) and an even slower decay of connections (local binding), resulting in an elongated and densified pseudo-time arrow.
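In terms of the toy simulation from the previous section, this corresponds to lowering both vanish probabilities, with the edge probability lowered the most (the parameter values are again purely illustrative):

```python
# Reusing the step() sketch from above; a "psychedelic" run keeps nodes
# around longer and binding edges around even longer:
G_psychedelic = nx.Graph()
for t in range(10):
    step(G_psychedelic, t, p_node_vanish=0.1, p_edge_vanish=0.03)
```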

This explains why time seems to move much more slowly on psychedelics. Namely, each moment of experience has significantly more temporal depth than a corresponding sober state. To illustrate this point, here is a first-person account of this effect:

A high dose of LSD seems to distort time for me the worst… maybe in part because it simply lasts so long. At the end of an LSD trip when i’m thinking back on everything that happened my memories of the trip feel ancient.

When you’re experiencing the trip it’s possible to feel time slowing down, but more commonly for me I get this feeling when I think back on things i’ve done that day. Like “woah, remember when I was doing this. That feels like it was an eternity ago” when in reality it’s been an hour.

 

Shroomery user Subconscious in the thread “How long can a trip feel like?”

On low doses of psychedelics, phenomenal time may seem to acquire a sort of high definition unusual for sober states. The incredible (and accurate) visual acuity of threshold DMT experiences is a testament to this effect, and it exemplifies what a densified pseudo-time arrow feels like:


Just as small doses of DMT enhance the definition of spatial structures, so is the pseudo-time arrow made more regular and detailed, leading to a strange but compelling feeling of “HD vision”.

But this is not all. Psychedelics, in higher doses, can lead to much more savage and surrealistic changes to the pseudo-time arrow. Let us tackle a few of the more exotic variants with this explanatory framework:

Time Loops

This effect feels like being stuck in a perfectly repeating sequence of events, outside of the universe, in some kind of Platonic closed timelike curve. People often accidentally induce this effect by conducting repetitive tasks or listening to repetitive sounds (which ultimately entrain this pattern). For most people this is a very unsettling experience, since it produces a pronounced feeling of helplessness: it makes you feel powerless to ever escape the loop.

In terms of the causal network, this experience could be accounted for with a loop in the pseudo-time arrow of experience:


High Dose LSD can lead to annealing and perfect “standing temporal waves” often described as “time looping” or “infinite time”

Moments of Eternity

Subjectively, so-called “Moments of Eternity” are extremely bizarre experiences that have the quality of being self-sustaining and unconditioned. It is often described in mystical terms, such as “it feels like one is connected to the eternal light of consciousness with no past and no future direction”. Whereas time loops lack some of the common features of phenomenal time such as a vanishing past, moments of eternity are even more alien as they also lack a general direction for the pseudo-time arrow.


High Dose LSD may also generate a pseudo-time arrow with a central source and sink that connects all nodes.

Both time loops and moments of eternity arise from the confluence of a slower time-dependent qualia decay function and structural annealing (which is typical of feedback). As covered in previous posts, as depicted in numerous psychedelic replications, and as documented in PsychonautWiki, one of the core effects of psychedelics is to lower the symmetry detection threshold. Visually, this leads to the perception of wallpaper symmetry groups covering textures (e.g. grass, walls, etc.). But this effect is much more general than mere visual repetition; it generalizes to the pseudo-time arrow! The texture repetition via mirroring, gyrations, glides, etc. works indiscriminately across (phenomenal) time and space. As an example of this, consider the psychedelic replication gifs below and how the last one nearly achieves a standing-wave structure. On a sufficient dose, this can anneal into a space-time crystal, which may have “time looping” and/or “moment of eternity” features.

[Animated psychedelic replication gifs, alongside a “Sober Input” reference frame]
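
To make the toy model concrete, here is a minimal sketch in Python (using networkx; the node counts and wiring are illustrative assumptions, not QRI’s actual formalism) of the sober arrow, the time loop, and the moment of eternity treated as directed graphs over moments of experience:

```python
# Minimal sketch (illustrative, not QRI's actual formalism): pseudo-time
# arrows as directed graphs over "moments of experience".
import networkx as nx

def linear_arrow(n=8):
    """Sober baseline: a chain m0 -> m1 -> ... with a clear global direction."""
    return nx.DiGraph((i, i + 1) for i in range(n - 1))

def time_loop(n=8):
    """'Time loop': the chain closes on itself, so no global ordering exists."""
    g = linear_arrow(n)
    g.add_edge(n - 1, 0)  # the loop-closing edge
    return g

def moment_of_eternity(n=8):
    """'Moment of eternity': a central node acts as source and sink for all."""
    g = nx.DiGraph()
    for i in range(1, n):
        g.add_edge(0, i)  # center -> node (source role)
        g.add_edge(i, 0)  # node -> center (sink role)
    return g

for name, g in [("linear", linear_arrow()),
                ("time loop", time_loop()),
                ("moment of eternity", moment_of_eternity())]:
    # A pseudo-time *arrow* exists iff the graph admits a topological order,
    # i.e. iff it is acyclic; loops and source/sink hubs destroy that order.
    print(f"{name:>18}: global time order exists = {nx.is_directed_acyclic_graph(g)}")
```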

Temporal Branching

As discussed in a previous post, a number of people report temporal branching on high doses of psychedelics. The reported experience can be described as simultaneously perceiving multiple possible outcomes of a given event, and its branching causal implications. If you flip a coin, you see it both coming up heads and tails in different timelines, and both of these timelines become superimposed in your perceptual field. This experience is particularly unsettling if one interprets it through the lens of direct realism about perception. Here one imagines that the timelines are real, and that one is truly caught between branches of the multiverse. Which one is really yours? Which one will you collapse into? Eventually one finds oneself in one or another timeline, with the alternatives having been pruned. An indirect realist about perception has an easier time dealing with this experience, as she can interpret it as the explicit rendering of one’s predictions about the future in such a way that they interfere with one’s incoming sensory stimuli. But just in case, in the linked post we developed an empirically testable prediction from the wild possibility (i.e. where you literally experience information from adjacent branches of the multiverse) and tested it using quantum random number generators (and, thankfully for our collective sanity, obtained null results).

[Image: high-dose LSD, temporal branching]

High Dose LSD Pseudo-Time Arrow Branching, as described in trip reports where people seem to experience “multiple branches of the multiverse at once.”
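
For the curious, the statistical logic of that null-result test can be sketched in a few lines. Everything below is a stand-in (a pseudo-random generator plays the role of the quantum RNG, and the guesses are simulated rather than collected from a subject); the real protocol in the linked post is more involved:

```python
# Sketch of the statistical logic behind the null-result test. Everything
# here is a stand-in: a pseudo-random generator plays the quantum RNG, and
# the "guesses" are simulated rather than collected from a subject.
import random
from scipy.stats import binomtest

random.seed(0)
n_trials = 1000
qrng_bits = [random.randint(0, 1) for _ in range(n_trials)]  # hidden bits
guesses   = [random.randint(0, 1) for _ in range(n_trials)]  # no branch access

hits = sum(g == b for g, b in zip(guesses, qrng_bits))
# Null hypothesis (no information from adjacent branches): hits ~ Binomial(n, 0.5)
result = binomtest(hits, n_trials, p=0.5)
print(f"hits = {hits}/{n_trials}, p-value = {result.pvalue:.3f}")  # large p => null result
```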

Timelessness

Finally, in some situations people report the complete loss of a perceived time arrow, not due to time loops, moments of eternity, or branching, but rather due to scrambling. This is less common on psychedelics than the previous kinds of exotic phenomenal time, but it still happens, and it is often very disorienting and unpleasant (an “LSD experience failure mode”, so to speak). It is likely that this also happens on anti-psychotics, and quite possibly with some anti-depressants, which seem to destroy unpleasant states by scrambling the network of local binding (rather than annealing it, as most euphoric drugs do).

[Image: loss of the pseudo-time arrow]

Loss of the Pseudo-Time Arrow (bad trips? highly scrambled states caused by anti-psychotics?)
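
Continuing the graph sketch from above, scrambling can be caricatured as the loss of any near-linear ordering of the causal network. The “forwardness” score below is my own illustrative heuristic, not a measure proposed in the literature:

```python
# Companion sketch: "scrambling" as the loss of any near-linear order in
# the pseudo-time graph. The forwardness score is an illustrative heuristic.
import random
import networkx as nx

random.seed(1)

def forwardness(g):
    """Fraction of edges pointing 'forward' under a heuristic node order
    (sorted by out-degree minus in-degree). 1.0 = a clean arrow of time;
    values sagging toward 0.5 = no recoverable global direction."""
    score = {v: g.out_degree(v) - g.in_degree(v) for v in g}
    rank = {v: i for i, v in enumerate(sorted(g, key=score.get, reverse=True))}
    return sum(rank[u] < rank[v] for u, v in g.edges()) / g.number_of_edges()

chain = nx.DiGraph((i, i + 1) for i in range(19))
scrambled = nx.DiGraph()
while scrambled.number_of_edges() < 19:
    u, v = random.sample(range(20), 2)
    scrambled.add_edge(u, v)  # randomly oriented causal links

print(f"chain:     forwardness = {forwardness(chain):.2f}")      # 1.00
print(f"scrambled: forwardness = {forwardness(scrambled):.2f}")  # noticeably lower
```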

In summary, this framework can tackle some of the weirdest and most exotic experiences of time. It renders subjective time legible to formal systems. And although it relies on an unrealistically simple formalism for the mathematical structure of consciousness, the traction we are getting is strong enough to make this approach a promising starting point for future developments in philosophy of time perception.

We will now conclude with a few final thoughts…

Hyperbolic Geometry

Intriguingly, with compounds such as DMT, the layering process is so fast that on doses above the threshold level one very quickly loses track of the individual layers. In turn, one’s mind attempts to bind together the incoming layers, which leads to attempts to stitch multiple layers together in a small (phenomenal) space. This confusion between layers, compounded with a high density of edges, is how we explained the unusual geometric features of DMT hallucinations, such as the spatial hyperbolic symmetry groups expressed in its characteristic visual texture repetition (cf. eli5). One’s mind tries to deal with multiple copies of e.g. the wall in front, and the simplest way to do so is to stitch them together in a woven Chrysanthemum pattern with hyperbolic wrinkles.

Implementation Level of Abstraction

It is worth noting that this account of phenomenal time lives at the algorithmic layer of Marr’s levels of abstraction, and hence is an algorithmic reduction (cf. Algorithmic Reduction of Psychedelic States). A full account would also have to deal with how these algorithmic properties are implemented physically. The point being that a phenomenal binding plus causal network account of phenomenal time works as an explanation space whether the network itself is implemented with connectome-specific harmonic waves, serotonergic control-interruption, or something more exotic.

Time and Valence

Of special interest to us is the fact that both moments of eternity and time loops tend to be experienced with very intense emotions. One could imagine that finding oneself in such an altered state is itself bewildering and therefore stunning. But there are many profoundly altered states of consciousness that lack a corresponding emotional depth. Rather, we think that this falls out of the very nature of valence and the way it is related to the structure of one’s experience.

In particular, the symmetry theory of valence (STV) we are developing at the Qualia Research Institute posits that the pleasure-pain axis is a function of the symmetry (and anti-symmetry) of the mathematical object whose features are isomorphic to an experience’s phenomenology. In the case of the simplified toy model of consciousness based on the network of local binding connections, this symmetry may manifest in the form of regularity within and across layers. Both in time loops and moments of eternity we see a much more pronounced level of symmetry of this sort than in the sober pseudo-time arrow structure. Likewise, symmetry along the pseudo-time arrow may explain the high levels of positive valence associated with music, yoga, orgasm, and concentration meditation. Each of these activities would seem to lead to repeating standing waves along the pseudo-time arrow, and hence, highly valenced states. Future work shall aim to test this correspondence empirically.
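
As a toy illustration of this intuition (and only that; it is not QRI’s actual valence metric), one can score a one-dimensional “pseudo-time” signal by how much of its power concentrates in a single frequency. A standing wave, the caricature of a time loop or of musical regularity, scores near 1; noise, the caricature of a scrambled arrow, scores near 0:

```python
# Toy illustration only (not QRI's actual valence metric): score a
# one-dimensional "pseudo-time" signal by how much of its (non-DC) power
# sits in a single dominant frequency.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)

standing_wave = np.sin(2 * np.pi * 8 * t)  # loop-like, highly regular
noisy_arrow = rng.standard_normal(512)     # scrambled, irregular

def spectral_purity(x):
    """Fraction of total (non-DC) spectral power in the strongest bin."""
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    return power.max() / power.sum()

print(f"standing wave: {spectral_purity(standing_wave):.3f}")  # close to 1.0
print(f"noise:         {spectral_purity(noisy_arrow):.3f}")    # close to 0.0
```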

[Image: QRI logo]

The Qualia Research Institute Logo (timeless, as you can see)


* As Yudkowsky puts it:

[Diagram: causal network with nodes L1, L2, M1, M2, R1, R2; causality flows to the right]

Suppose that we do know L1 and L2, but we do not know R1 and R2. Will learning M1 tell us anything about M2? […]

The answer, on the assumption that causality flows to the right, and on the other assumptions previously given, is no. “On each round, the past values of 1 and 2 probabilistically generate the future value of 1, and then separately probabilistically generate the future value of 2.” So once we have L1 and L2, they generate M1 independently of how they generate M2.

But if we did know R1 or R2, then, on the assumptions, learning M1 would give us information about M2. […]

Similarly, if we didn’t know L1 or L2, then M1 should give us information about M2, because from the effect M1 we can infer the state of its causes L1 and L2, and thence the effect of L1/L2 on M2.
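
The quoted point can be checked numerically. In the sketch below (with an illustrative noise model of my own choosing), M1 and M2 are each generated from the common causes L1 and L2 plus independent noise; they correlate unconditionally, and the correlation vanishes once L1 and L2 are held fixed:

```python
# Numerical check of the quoted point (noise model is illustrative): M1 and
# M2 are each generated from the common causes L1, L2 plus independent
# noise. They correlate unconditionally; fixing L1 and L2 screens them off.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

L1 = rng.integers(0, 2, n)
L2 = rng.integers(0, 2, n)
flip1 = (rng.random(n) < 0.2).astype(int)  # independent noise for M1
flip2 = (rng.random(n) < 0.2).astype(int)  # independent noise for M2
M1 = (L1 ^ L2) ^ flip1  # generated from the common causes
M2 = (L1 ^ L2) ^ flip2  # separately generated from the same causes

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

fixed = (L1 == 1) & (L2 == 0)  # condition on one setting of the causes
print(f"corr(M1, M2) unconditioned: {corr(M1, M2):+.3f}")                 # ~ +0.36
print(f"corr(M1, M2) given L1, L2:  {corr(M1[fixed], M2[fixed]):+.3f}")   # ~ 0
```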



Thanks to: Mike Johnson, David Pearce, Romeo Stevens, Justin Shovelain, Andrés Silva Ruiz, Liam Brereton, and Enrique Bojorquez for their thoughts about phenomenal time and its possible mathematical underpinnings. And to Alfredo Valverde for pointing me to the Erlangen program.

Qualia Research Institute presentations at The Science of Consciousness 2018 (Tucson, AZ)

As promised, here are the presentations Michael Johnson and I gave in Tucson last week to represent the Qualia Research Institute.

Here is Michael’s presentation: [video embed]

And here is my presentation: [video embed]


On a related note:

  1. Ziff Davis PCMag published an interview with me in anticipation of the conference.
  2. An ally of QRI, Tomas Frymann, gave a wonderful presentation about Open Individualism titled “Consciousness as Interbeing: Identity on the Other Side of Self-Transcendence”.
  3. As a bonus, here is the philosophy of mind stand-up comedy sketch I performed at their Poetry Slam, which took place on Friday night (you should likewise check out their classic Zombie Blues).

From Point-of-View Fragmentation to Global Visual Coherence: Harmony, Symmetry, and Resonance on LSD

Excerpt from The Grand Illusion: A Psychonautical Odyssey Into the Depths of Human Experience by the cognitive scientist Steven Lehar (2010; pages 23-40).

Trip to Europe

I had two or three such experiences on my new batch of LSD, taking perhaps 2 or 3 “hits” (tabs) each time (presumed to be about [100] micrograms, or “mikes”, per tab). And each time the experience became somewhat more familiar, and I learned to think more clearly under its influence. In July 1990 I took a trip to Europe with Tim, a colleague from work, because we were both presenting posters at a neural network conference in Paris, and the company where we worked very kindly funded the travel expenses. Tim and I took this opportunity to plan a little excursion around Europe after the conference, visiting Germany, Austria, Italy, and Switzerland, touring in a rented car. When we got to Austria we bought a little tent at a camping store, then we hiked up an enormous mountain in the Alps, and spent the day sightseeing at the top. When I told Tim that I happened to have some LSD with me, his eyes lit up. It turns out he too had been a hippy in his youth, and had even attended the original Woodstock, so he immediately warmed to the idea of taking LSD with me on a mountain top, although he had not done psychedelic drugs in over a decade. So there in the most stupendous and idyllic setting of a mountain in the Austrian alps, early the next morning after camping overnight, we consumed five hits of LSD each, and spent the day in profound wonder at the glory of creation!

I made a few new and interesting discoveries on that mountain top in Austria. First of all, I learned to have a great deal more control of the experience in the following manner. I discovered that the effects of LSD become markedly stronger and more pronounced when you sit still and stare, and clear your mind, much like a state of zen meditation, or pre-hypnotic relaxation. When you do this under LSD, the visual world begins to break up and fragment in a most astonishing way. You tend to lose all sense of self, that is, you lose the distinction between self and non-self. This can be a very alarming experience for those who are prone to panic or anxiety, or for those who insist on maintaining a level of control and awareness of themselves and the world around them. But I also discovered that this mental dissociation and visual confusion can be diminished, and normal consciousness can be largely restored, by simply looking around, moving about, and interacting actively with the world around you. Because when you do this, suddenly the world appears as a solid and stable structure again, and your familiar body reappears where it belongs at the center of your world of experience. This discovery greatly enhanced my ability to explore the deeper spaces of consciousness revealed by the drug, while providing insurance against the natural panic that tends to arise with the dissolution of the self, and the world around you. It allowed me to descend into the depths of the experience while maintaining a life line back to consensual reality, like a spelunker descending into the bowels of the deep underground cavern of my mind, while always able to return safely to the surface. And what a splendid and magnificent cavern it was that I discovered within my mind!

One of the most prominent aspects of consciousness that has puzzled philosophers and psychologists for centuries is the unity of conscious experience. We feel that we live in a world that surrounds our body, and that world appears as a single “picture” or volumetric spatial structure, like a theatre set, every piece of which takes its proper place in the panorama of surrounding experience. It has always been somewhat difficult to grasp this notion of conscious unity, because it is difficult to even conceptualize the alternative. What would consciousness be like if it were not unified? What does that even mean? Under LSD you can discover what non-unified consciousness is like for yourself, and that in turn by contrast offers profound insights as to the nature and meaning of unified consciousness. Again, the most interesting revelations of the psychedelic experience are not confined to that experience itself, but they reveal insights into the nature of normal conscious experience that might otherwise be missed due to its familiarity. In fact, I realized much later, even normal consciousness has aspects which are not unified.

The most familiar example of non-unified consciousness is seen in binocular vision. Under normal conditions the two eyes view the same scene and produce a three-dimensional “picture” in the mind that is a unified construction based on the information from both eyes simultaneously. But everyone knows the experience of double vision. For those with greater control over their own visual function, double vision is easily achieved by simply staring into space and relaxing the eyes. As a vision scientist myself, I have trained myself to do this so as to be able to “free fuse” a binocular pair of left-eye, right-eye images to create the perception of a 3D scene. For those who have difficulty with this, a similar experience can be had by holding a small mirror at an angle close in front of one eye, so as to send very different images into the two eyes. Whichever way you do it, the result is rather unremarkable in its familiarity, and yet when you think of it, this is in fact an example of disunity of conscious experience that is familiar to one and all. For what you see in double vision is actually two visual experiences which are seen as if they are superimposed in some manner, and yet at the same time they are also experienced each in its own separate disconnected space. It is generally possible to observe the correspondence between these two disconnected visual experiences, for example to determine which point in one eye view relates to a particular point in the other, as if viewing two slide transparencies that are overlaid on top of one another, although this correspondence is shifting and unstable, as the vergence between your two eyes tends to wander when binocular fusion is broken. But in fact it is more natural to simply ignore that correspondence and to view the two visual experiences as separate and disconnected spaces that bear no significant spatial relation to each other. When the images in our two eyes do not correspond, we tend to focus on one while ignoring the other, like an experienced marksman who no longer has to close his idle eye while aiming a gun. And yet, although the image from the idle eye is generally ignored, it has not left consciousness entirely, and with an effort, or perhaps more accurately, with an absence of effort or focus, it is possible to experience both views simultaneously.

In the trance-like state of yoga-like meditation performed under LSD, the entire visual world breaks up and fragments in this manner into a multitude of disconnected parallel conscious experiences, each one only loosely related spatially to the other experiences in the visual field. The effect is much enhanced by the fact that your eyes actually diverge or relax in this mental state, as they do under binocular fission, and this helps trigger the state of visual confusion as your mind gives up on trying to make sense of what it is seeing. As in Zen meditation, the LSD trance state is a passive or receptive state of consciousness that allots equal attention, or perhaps lack of attention, to all components of experience, which is why they appear in parallel as separate disconnected pieces. The state of normal active consciousness resists this kind of parallel confusion, and tends to select and focus on the most significant portion, like the marksman aiming a gun, suppressing alternative experiences such as the view from the idle eye.

The deep LSD-induced trance state can be easily broken by simply moving the eyes, so conversely, the deeper states are achieved by complete mental and physical relaxation, with glazed eyes staring blankly into space. But of all the separate fragments of visual experience observed in this mental state, there is one special fragment located at the very center of the visual field, the foveal center, that appears somewhat sharper and clearer than the rest of the visual field. In fact, the visual fragmentation is somewhat like a kind of tunnel vision in which the peripheral portions of the visual field break off and disconnect from this central portion of the experience. But while the peripheral fragments become separated from the whole, they are never entirely and completely independent, but appear to interact with each other, and especially with the central foveal image in characteristic ways. For example if the foveal image shows a couple of blades of grass, twitching and dancing in the wind, then if any of the peripheral fragments of visual experience happen to show a similar image, i.e. blades of grass at a similar angle and twitching and dancing in synchrony with those in the foveal view, then the central and peripheral images become instantly coupled into a larger unified perceptual experience of a global motion sweeping through the image. Instead of a million blades of grass each twitching individually, we perceive the invisible wind as a wave of synchronous motion that sweeps invisibly across the blades of grass. The waves of motion caused by the wind are perceived as waves of energy across the visual field, a perceptual experience of something larger than the individual grass blades that collectively give rise to it. By careful adjustment of my state of relaxation, I found I could relax until the visual world fragmented into a million independent experiences, and I could gently bring it back into focus, as first a few, and then ever more of the fragmented visual experiences coupled together into fewer separate, and eventually a single unified global experience, much like the moment of binocular fusion when the two monocular images finally lock into each other to produce a single binocular experience.

When the visual world was locked into a unified perceptual experience, even then there were instabilities in local portions of the scene. A little detail seen in distant trees appears first as a mounted horseman, then pops abruptly into a hand with three fingers extended, then to a duck on a branch, then back to the mounted horseman, all the while the actual shape and color perceived remain unchanged; it is only the interpretation, or visual understanding of that pattern, that switches constantly, as when a child sees mountains and castles in the clouds. One of the many possible interpretations is of a dead tree with leafless branches (the veridical percept of what was actually there), and that is the only alternative that enters consciousness under normal circumstances. The effect of LSD is to make the visual system more tolerant of obvious contradictions in the scene, such as a giant horseman frozen in a line of trees. The effect is like those surrealistic Dali paintings, for example the Three Ages of Man, shown in Figure 2.1, where one sees a single coherent scene, local parts of which spontaneously invert into some alternative interpretation. This is very significant for the nature of biological vision, for it shows that vision involves a dynamic relaxation process whose stable states represent the final perceptual interpretation.

[Figure 2.1]

There was another interesting observation that I made that day. I noticed that under LSD things appear a little more regular and geometrical than they otherwise do. It is not the shape of things that is different under LSD, but rather the shapes we see in things. For example a cloud is about as irregular and fragmented a shape as a shape can be, and yet we tend to see clouds in a simplified cartoon manner, as a little puff composed of simple convex curves. A real cloud under closer inspection reveals a ragged ugly appearance with very indefinite boundaries and irregular structure. Under LSD the cloud becomes even more regular than usual. I began to see parts of the cloud as regular geometrical shapes, seeing the shapes in the shapes of the cloud as if on a transparent overlay.

Another rather astonishing observation of the LSD experience was that the visual world wavered and wobbled slowly as if the visual scene was painted on an elastic canvas that would stretch over here while shrinking over there, with great waves of expansion and contraction moving slowly across the scene, as if the whole scene was “breathing”, with its component parts in constant motion relative to each other. This was perhaps the most compelling evidence that the world of experience is not the solid stable world that it portrays. Figure 2.2 shows a sketch I made shortly after my alpine mountain adventure to try to express the wavery elasticity and the visual regularity I had observed under LSD. This picture is of course an exaggeration, more of an impression than a depiction of how the experience actually appeared.

[Figure 2.2]

The geometrical regularity was particularly prominent in peripheral vision, when attending to the periphery without looking to see what is there. Usually peripheral vision is hardly noticed, giving the impression of a homogeneous visual field, but under LSD the loss of resolution in peripheral vision becomes more readily apparent, especially when holding a fixed and glassy stare. And in that periphery, objects like trees or shrubs appear more regular and geometrical than they do in central vision, like artificial Christmas trees with perfectly regular spacing of branches and twigs. Again, it was not the raw image in the periphery that appeared regular or geometrical, but rather it was the invisible skeleton of visual understanding derived from that raw colored experience that exhibits the more regular features. And suddenly I could see it. This is the way the visual system encodes visual form in a compact or compressed manner, by expressing shape in terms of the next nearest regular geometrical form, or combination of forms. Children draw a tree as a circular blob of leaves on top of a straight vertical trunk, or a pine tree as a green triangle with saw-tooth sides. It is not that we see trees in those simplified forms, but rather that we see those simplified forms in the trees, and the forms that we perceive in these invisible skeletons are the expression of our understanding of the shapes we perceive those more irregular forms to have. This was later to turn into my harmonic resonance theory of the brain, as I sought an explanation for this emerging regularity in perception, but in 1990 all I saw was the periodicity and the symmetry, and I thought they were profoundly beautiful.

My friend Tim, who had not done LSD for many years, responded to this sudden 5 hit dose by going into a state of complete dissociation. He lay down on the forest floor with glassy eyes, muttering “It is TOO beautiful! It is TOO beautiful!” and he did not respond to me, even when I stared him straight in the face. He reported afterwards that he found himself in a giant Gothic cathedral with the most extravagantly elaborate and brightly painted ornamental decorations all around him. This too can be seen as an extreme form of the regularization discussed above. Under the influence of this powerful dose, Tim’s visual brain could no longer keep up with the massive irregularity of the forest around him, and therefore presented the forest in simplified or abbreviated form, as the interior of a Gothic cathedral. It captures the large geometry of a ground plane that supports an array of vertical columns, each of which fans out high overhead to link up into an over-arching canopy of branches. The only difference is that in the Gothic cathedral the trees are in a regular geometrical array, and each one is a masterpiece of compound symmetry, composed of smaller pillars of different diameters in perfectly symmetrical arrangements, and studded with periodic patterns of ribs, ridges, or knobby protuberances as a kind of celebration of symmetry and periodicity for their own sake. There is a kind of geometrical logic expressed in the ornamental design. If part of the cathedral were lost or destroyed, the pattern could be easily restored by following the same logic as the rest of the design. In information-theoretic terms, the Gothic cathedral has lots of redundancy; its pattern could be expressed in a very much simpler compressed geometrical code. In Tim’s drug-addled brain his visual system could only muster a simple code to represent the world around him, and that is why Tim saw the forest as a Gothic cathedral. Under normal conditions, the additional information of irregularity, or how each tree and branch breaks from the strict regularity of the cathedral model of it, creates the irregular world of experience that we normally see around us. This suggests that the beautiful shapes of ornamental art are not the product of the highest human faculty, as is commonly supposed, but rather, ornamental art offers a window onto the workings of a simpler visual system, whose image of the world is distorted by artifacts of the representational scheme used in the brain. The Gothic cathedral gives a hint as to how the world might appear to a simpler creature, a lizard, or a snake, to whom the world appears more regular than it does to us, because its full irregularity is too expensive to encode exhaustively in all its chaotic details. Of course the flip-side of this rumination is that the world that we humans experience, even in the stone-cold sober state, is itself immeasurably simpler, more regular and geometric, than the real world itself, of which our experience is an imperfect replica. In the words of William Blake, “If the doors of perception were cleansed, everything would appear to man as it is, infinite.”
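
The information-theoretic point is easy to make concrete: a periodic, cathedral-like pattern compresses to a small fraction of its size, while an irregular, forest-like one barely compresses at all. The sketch below uses arbitrary stand-in data:

```python
# Concrete version of the information-theoretic aside: a periodic,
# cathedral-like pattern compresses to a fraction of its size, while an
# irregular, forest-like one barely compresses. Strings are just stand-ins.
import random
import zlib

random.seed(0)
cathedral = b"column arch rib " * 256  # periodic and highly redundant
forest = bytes(random.randrange(256) for _ in range(len(cathedral)))  # irregular

for name, data in [("cathedral", cathedral), ("forest", forest)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:>9}: {len(data)} bytes -> compression ratio {ratio:.2f}")
```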

Mittersill

While I was a PhD student at Boston University, my parents owned a beautiful ski lodge house in the picturesque town of Mittersill in the mountains of New Hampshire, and on spring breaks or long week-ends I would invite my friends, the other PhD candidates, up to Mittersill where we would take long hikes up the mountain, and spend evenings by the fireplace. I introduced a small circle of my friends to the illuminating experience of LSD, in the hopes of sharing some of my perceptual discoveries with them, and perhaps inducing them to learn to use the experience to make discoveries of their own. Eventually Mittersill became associated in our minds with these group trips with an ever-shrinking circle of true diehard psychonauts, making our regular pilgrimage up the mountain in search of Truth and to touch the face of God. We always brought a goodly supply of Happy T’Baccy, which provides a beautiful complement and bemellowment to the otherwise sometimes sharp and jangly LSD experience. Our pattern was usually to arrive on a Friday night, cook up a great feast, and spend an evening by the fire, drinking beer and/or wine and passing the pipe around until everyone felt properly toasted. The talk was often about the workings of mind and brain, since we were all students of cognitive and neural systems. We were all adept computer programmers and well versed in mathematics as part of our PhD studies, so we all understood the issues of mental computation and representation, and I found the conversations about the computational principles of the mind to be most interesting and intellectually stimulating. This was the high point of my academic career; this is why people want to be scientists. The next morning we would rise early, and after a hearty breakfast, we would all set off up the mountain, which was a steep brisk climb of two or three hours. About half way up the mountain, at a carefully pre-planned time, we would stop, and each “dose up” with our individually chosen dose of LSD for the occasion, timed to reach the peak of the experience about the time we reached the peak of the mountain. Then we would continue our climb through the rich lush mountain forests of New Hampshire to the top of Maida Vale, the sub-peak next to Canon Mountain, from whence a stupendous view opened up across to Canon Mountain and the vast valley below. We would settle ourselves comfortably at some location off the beaten track, and spend the best hours of the day just dreaming crazy thoughts and drinking in the experience.

By now I had perfected my introspective techniques to the point that I could voluntarily relax my mind into a state of total disembodiment. The visual world began to fragment, first into two large pieces as binocular fusion was broken, then into a few smaller fragments, and eventually into a myriad of separate fragments of consciousness, like the myriad reflections from a shattered mirror. I was fascinated by this state of consciousness, and how different it was from normal consensual reality. Most alarming or significant was the total absence of my body from its normal place at the center of my world. As the world began to fragment, my body would fragment along with it, disconnected pieces of my body seeming to exist independently, one part here, another over there, but in separate spaces that did not exist in a distinct spatial relation to each other, but as if in different universes, like reflections from different shards of a shattered mirror. And as the visual world attained total fragmentation, all evidence of my body completely vanished, and I lived the experience of a disembodied spirit, pure raw experience, just sensations of color, form, and light. I felt safe and secure in this environment among friends, so I did not mind the total vulnerability afforded by a complete functional shutdown of my mind in this manner. Besides, I had learned that I could snap back together again to a relatively normal consciousness at will, simply by getting up and looking around, and interacting with the world. I was endlessly fascinated by the state of complete disembodiment, and one feature of it impressed itself on me again and again: the geometric regularity of it all. There was a powerful tendency for everything to reduce to ornamental patterns, geometrical arrangements of three-dimensional shapes, like so many glistening gems in a jewelry store, with rich periodic and symmetrical patterns in deep vibrant colors. The deeper I plunged into the experience, the simpler and more powerfully emotive those patterns became. And since my body had totally vanished, these patterns were no longer patterns I viewed out in the world; rather, the patterns were me, I had become the spatial patterns which made up my experience. I began to see that symmetry and periodicity were somehow primal to experience.

I remember lying on my back and watching the clouds in the sky overhead. Weather patterns are often chaotic at the tops of mountains, and on more than one occasion we were located at a spot where the clouds that formed on the windward side of the mountain were just cresting the summit, where they would dissolve in a continuous process of chaotic fragmentation, a veritable Niagara Falls of nebular dissolution, evocative of the fragmentation of my psychedelic experience. The shattered shreds of cloud, viewed from this close up, were about the most ragged and irregular shapes you could imagine, and yet under the influence of the drug I kept seeing fleeting geometrical patterns in them. There were great circular pinwheels and arabesques, patterns like those carved in the doors of Gothic cathedrals, but each flashing in and out of brief existence so quickly that it would be impossible to draw them. I began to realize that the human mind is one great symmetry engine, that the mind makes sense of the world it sees by way of the symmetries that it finds in it. Symmetry is the glue that binds the fragments of experience into coherent wholes.

Figure 2.3 shows a series of paintings by the artist Louis Wain that I find very evocative of the LSD experience. Wain suffered a progressive psychosis that manifested itself in his art, which was originally quite realistic, becoming progressively more abstract and ornamental, in the manner I observed in the various stages or levels of my LSD dissociation. Figure 2.3 A shows a fairly realistic depiction of a cat, but there are curious artifacts in the textured background, a mere hint of periodicity breaking out. I would see such artifacts everywhere, almost invisible, fleeting, and faint, reminiscent of the ornamental pinstripe patterns painted on trucks and motorcycles, a kind of eddy in the stream of visual consciousness as it flows around visual features in the scene. As I descended into the fully dissociated states, the patterns would become more like Figure 2.3 B, C, and D, breathtakingly ornate, with many levels of compound symmetry, revealing the eigenfunctions of perceptual representation, the code by which visual form is represented in the brain.

At times we would break free from our individual reveries, and share absurd nonsensical conversations about our observations. One time, looking down at the vast valley stretching out below us, a vista that seemed to stretch out to distances beyond comprehension, my old friend Peter said that it was hard to tell whether all that scenery was just “way out there”, or was it “way WAY out there?” Of course we both laughed heartily at the absurdity of his statement, but I knew exactly what he meant. When viewing such a grand vista under normal consciousness, one is deeply impressed by the vastness of the view.

[Figure 2.3]

But under the influence of the drug, the vista somehow did not look quite as large as we “knew” that it really was. What Peter was saying was that for some strange reason, the world had shrunken back in on us, and that magnificently vast valley had shrunken to something like a scale model, or a diorama, where it is easy to see how vast the modeled valley is supposed to be, but the model itself appears very much smaller than the valley it attempts to portray. What Peter was observing was the same thing I had observed, and that was beginning to even become familiar, that the world of our experience is not a great open vastness of infinite space, but like the domed vault of the night sky, our experience is bounded by, and contained within, a vast but finite spherical shell, and under the influence of psychedelic drugs that shell seemed to shrink to smaller dimensions, our consciousness was closing in on its egocentric center. Many years later after giving it considerable thought, I built the diorama shown in Figure 2.4 to depict the geometry of visual experience as I observed it under LSD.

[Figure 2.4]

And when I was in the completely disembodied state, my consciousness closed in even smaller and tighter, the range of my experience was all contained within a rather modest sized space, like a glass showcase in a jewelry store, and the complexity of the patterns in that space was also reduced, from the unfathomably complex and chaotic fractal forms in a typical natural scene, to a much simpler but powerfully beautiful glistening ornamental world of the degree of complexity seen in a Gothic cathedral. The profound significance of these observations dawned on me incrementally every time we had these experiences. I can recall fragmentary pieces of insights gleaned through the confusion of our passage down the mountain, stopping to sit and think wherever and whenever the spirit took us. At one point three of us stopped by a babbling brook that was crashing and burbling through the rocks down the steep mountain slope. We sat in silent contemplation of this primal “white noise” sound, when Lonce commented that if you listen, you can hear a million different sounds hidden in that noise. And sure enough when I listened, I heard laughing voices and honking car horns and shrieking crashes and jangly music and every other possible sound, all at the same time superimposed on each other in a chaotic jumbled mass. It was the auditory equivalent of what we were seeing visually: the mind was latching onto the raw sensory experience not so much to view it as it really is, but to conjure up random patterns from deep within our sensory memory and to match those images to the current sensory input. And now I could see the more general concept. We experience the world by way of these images conjured up in our minds. I came to realize why the LSD experience was enjoyed best in outdoor natural settings, and that is because the chaos of a natural scene, with its innumerable twigs and leaves and stalks, acts as a kind of “white noise” stimulus, like the babbling brook, a stimulus that contains within it every possible pattern, and that frees our mind to interpret that noise as anything it pleases.

On one occasion, on arrival back down at the lodge, our minds were still reeling, and we were not yet ready to leave the magnificence of the natural landscape for the relatively tame and controlled environment indoors, so Andy and I stopped in the woods behind the house and just stood there, like deer in the headlights, drinking in the experience. It was a particularly dark green and leafy environment in the shadow behind the house, with shrubs and leaves at every level, around our ankles, our knees, our shoulders, and all the way up to a leafy canopy high overhead, and at every depth and distance from inches away to the farthest visible depths of the deep green woods. The visual chaos was so total and complete, the world already fragmented into millions and millions of apparently disconnected features and facets uniformly in all directions, that it hardly required LSD to appreciate the richness of this chaotic experience. But under LSD, and with the two of us standing stock still for many long minutes of total silence, we both descended into a mental fragmentation as crazy as the fragmented world around us. My body disappeared from my experience, and I felt like I became the forest; the forest and all its visual chaos was me, which in a very real sense it actually was. And in that eternal timeless moment, wrapped in intense but wordless thought, I recognized something very very ancient and primal in my experience. I felt like I was sharing the experience of some primal creature in an ancient swamp many millions of years ago, when nature was first forging its earliest models of mind from the tissue of brain. I saw the world with the same intense attentive concentration, bewilderment, and total absence of human cognitive understanding, as that antediluvian Cretaceous lizard must have experienced long ago and far away. The beautiful geometrical and symmetrical forms that condensed spontaneously in my visual experience were like the first glimmerings of understanding emerging in a primitive visual brain. This is why I do psychedelic drugs: to connect more intimately with my animal origins, to celebrate the magnificent mental mechanisms that we inherit from the earliest animal pioneers of mind.

One time after we had descended from the mountain and were sitting around the lodge drinking and smoking in a happy state of befuddlement, a peculiar phenomenon manifested itself that made a deep impression on me. It was getting close to supper time and somebody expressed something to that effect. But our minds were so befuddled by the intoxication that we could only speak in broken sentences, as we inevitably forgot what we wanted to say just as we started saying it, instantly confused by our own initial words. So the first person must have said something like “I’m getting hungry. Do you think…” and then tailed off in confusion. But somebody else would immediately sense the direction that thought was going, and would instinctively attempt to complete the sentence with something like “…we otta go get…” before himself becoming confused, at which a third person might interject “…something to eat?” It does not sound so remarkable here in the retelling, but what erupted before our eyes was an extraordinarily fluid and coherent session of what we later referred to as group thought, where the conversation bounced easily from one person to the next, each person contributing only a fragmentary thought, but nobody having any clear idea of what the whole thought was supposed to be, or how it was going to end. What was amazing about the experience was the coherence and purposefulness of the emergent thought, how it seemed to have a mind of its own independent of our individual minds, even though of course it was nothing other than the collective action of our befuddled minds. It was fascinating to see this thought, like a disembodied spirit, pick up and move our bodies and hands in concerted action, one person getting wood for the fire, another getting out a frying pan, a third going for potatoes, or to open a bottle of wine, none of it planned by any one person, and yet each person chipped in just as and when they thought would be appropriate, as the supper apparently “made itself” using us as its willing accomplices. It was reminiscent of the operational principle behind a ouija board, where people sitting in a circle around a table, all rest an index finger on some movable pointer on a circular alphabet board, and the pointer begins to spell out some message under the collective action of all those fingers. At first the emergent message appears random, but after the first few letters have been spelled out, the participants start to guess at each next letter, and without anyone being overtly aware of it, the word appears to “spell itself” as if under the influence of a supernatural force. As with the ouija board, none of us participating in the group thought experience could hold a coherent thought in their head, and yet coherent thoughts emerged nevertheless, to the bewilderment of us all. And later I observed the same phenomenon with different LSD parties. I have subsequently encountered people well versed in the psychedelic experience who claim with great certainty to have experienced mental telepathy in the form of wordless communication and sharing of thoughts. But for us hard-nosed scientific types, the natural explanation for this apparently supernatural experience is just as wondrous and noteworthy, because it offers a hint as to how the individual parts of a mind act together in concert to produce a unified coherent pattern of behavior that is greater than the mere sum of its constituent parts.
The principle of group thought occurs across our individual brains in normal sober consciousness as we instinctively read each other’s faces and follow each other’s thoughts, and it is seen also whenever people are moving a heavy piece of furniture, all lifting and moving in unison in a coherent motion towards some goal. But the psychedelic experience highlighted this aspect of wordless communication and brought it to my attention in clearer, sharper focus.

As the evening tailed on and the drug’s effect gradually subsided in a long slow steady decline, we would sit by the fire and pass a pipe or joint around, and share our observations and experiences of the day. At one point Lonce, who had just taken a puff of a joint, breathed out and held it contemplatively for a while, before taking a second puff and passing it on to the next person in the circle. I objected to this behavior, and accused Lonce of “Bogarting” the joint – smoking it all by himself without passing it along. Lonce responded to this with an explanation that where he comes from, people don’t puff and pass in haste, but every man has the right to a few moments of quiet contemplation and a second puff before passing it along. That was, he explained, the civilized way of sharing a joint. So we immediately adopted Lonce’s suggestion, and this method of sharing a joint has henceforth and forever since been referred to by us as the “Lonce Method”.

Theoretical Implications

As I have explained, the purpose of all this psychonautical exploration was not merely for our own entertainment, although entertaining it was, and to the highest degree. No, the primary purpose of these psychonautical exploits was clear all along, at least in my mind, and that was to investigate the theoretical implications of these experiences for theories of mind and brain. And my investigations were actually beginning to bear fruit in two completely separate directions, each of which had profound theoretical implications. At that time I was studying neural network theories of the brain, or how the brain makes sense of the visual world. A principal focus of our investigation was the phenomenon of visual illusions, like the Kanizsa figure shown in Figure 2.5 A. It is clear that what is happening here is that the visual mind is creating illusory contours that link up the fragmentary contours suggestive of the illusory triangle. In our studies we learned of Stephen Grossberg’s neural network model of this phenomenon. Grossberg proposed that the visual brain is equipped with oriented edge detector neurons that fire whenever a visual edge passes through their local receptive field. These neurons would be triggered by the stark black / white contrast edges of the stimulus in Figure 2.5 A. A higher level set of neurons would then detect the global pattern of collinearity, and sketch in the illusory contour by a process of collinear completion. These higher level “cooperative cells” were equipped with much larger elongated receptive fields, long enough to span the gap in the Kanizsa figure, and the activation of these higher level neurons in turn stimulated lower level local edge detector neurons located along the illusory contour, and that activation promoted the experience of an illusory contour where there is none in the stimulus.

[Figure 2.5]
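
A drastically simplified sketch of this collinear-completion scheme (one spatial dimension, a single orientation, and hand-picked thresholds; Grossberg’s actual model is far more elaborate) might look like this:

```python
# Drastic simplification of the collinear-completion idea (one dimension,
# one orientation, hand-picked numbers; the actual Boundary Contour System
# is far more elaborate).
import numpy as np

# Local oriented edge-detector responses along a line: two collinear
# segments separated by a gap (the would-be illusory contour).
local = np.array([1.0] * 10 + [0.0] * 6 + [1.0] * 10)

# "Cooperative cell": an elongated receptive field long enough to span the gap.
coop_field = np.ones(13) / 13.0
support = np.convolve(local, coop_field, mode="same")

# Feedback: where long-range collinear support is strong, the lower-level
# cells along the gap are activated despite the absence of bottom-up input.
completed = np.maximum(local, (support > 0.5).astype(float))
print("before:", local.astype(int))
print("after: ", completed.astype(int))  # the gap has been filled in
```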

I believed I was seeing these illusory contours in my LSD experience, as suggested by all the curvy lines in my sketch in Figure 2.2 above. But I was not only seeing the contours in illusory figures, I was seeing “illusory” contours just about everywhere across the visual field. But curiously, these contours were not “visible” in the usual sense, but rather, they are experienced in an “invisible” manner as something you know is there, but you cannot actually see. However I also noticed that these contours did have an influence on the visible portions of the scene. I have mentioned how under LSD the visual world tends to “breathe”, to waver and wobble like a slow-motion movie of the bottom of a swimming pool viewed through its surface waves. In fact, the effect of the “invisible” contours was very much like the effect of the invisible waves on the surface of the pool, which can also be seen only by their effects on the scene viewed through them. You cannot see the waves themselves, all you can see is the wavering of the world caused by those waves. Well I was observing a very similar phenomenon in my LSD experience. I devised a three-dimensional Kanizsa figure, shown in Figure 2.5 B, and observed that even in the stone-cold sober state, I could see a kind of warp or wobble of the visual background behind the illusory contour caused by the figure, especially if the figure is waved back and forth gently against a noisy or chaotic background. So far, my LSD experiences were consistent with our theoretical understanding of the visual process, confirming to myself by direct observation an aspect of the neural network model we were currently studying in school.

But there was one aspect of the LSD experience that had me truly baffled, and that was the fantastic symmetries and periodicities that were so characteristic of the experience. What kind of neural network model could possibly account for that? It was an issue that I grappled with for many months that stretched into years. In relation to Grossberg’s neural network, it seemed that the issue concerned the question of what happens at corners and vertices where contours meet or cross. A model based on collinearity alone would be stumped at image vertices. And yet a straightforward extension of Grossberg’s neural network theory to address image vertices leads to a combinatorial explosion. The obvious extension, initially proposed by Grossberg himself, was to posit specialized “cooperative cells” with receptive fields configured to detect and enhance other configurations of edges besides ones that are collinear. But the problem is that you would need so many different specialized cells to recognize and complete every possible type of vertex, such as T and V and X and Y vertices, where two or more edges meet at a point, and each of these vertex types would have to be replicated at every orientation, and at every location across the whole visual field! It just seemed like a brute-force solution that was totally implausible.
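
A quick back-of-the-envelope calculation shows the scale of the problem; every count below is an illustrative assumption rather than an anatomical estimate:

```python
# Back-of-the-envelope version of the worry; every count below is an
# illustrative assumption, not an anatomical estimate.
vertex_types = 4         # e.g. T, V, X, and Y junctions
orientations = 36        # say, 10-degree steps
locations = 100 * 100    # a coarse retinotopic grid
specialized_cells = vertex_types * orientations * locations
print(f"{specialized_cells:,} dedicated cooperative cells")  # 1,440,000
```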

Then one day after agonizing for months on this issue, my LSD observations of periodic and symmetrical patterns suddenly triggered a novel inspiration. Maybe the nervous system does not require specialized hard-wired receptive fields to accommodate every type of vertex, replicated at every orientation at every spatial location. Maybe the nervous system uses something much more dynamic and adaptive and flexible. Maybe it uses circular standing waves to represent different vertex types, where the standing wave can bend and warp to match the visual input. Standing waves would also explain all that symmetry and periodicity so clearly evident in the LSD experience: little rotational standing waves that emerge spontaneously at image vertices and adapt to the configuration of those vertices. Thanks to illegal psychotropic substances, I had stumbled on a staggeringly significant new theory of the brain, a theory which, if proven right, would turn the world of neuroscience on its head! My heart raced and pounded at the implications of what I had discovered. And this theory became the prime focus of my PhD thesis (Lehar 1994), in which I did computer simulations of my harmonic resonance model that replicated certain visual illusions in a way that no other model could. I had accomplished the impossible. I had found an actual practical use and purpose for what was becoming my favorite pastime, psychedelic drugs! It was a moment of glory for an intrepid psychonaut, a turning point in my life. Figure 2.6 shows a page from my notebook dated October 6, 1992, the first mention of my new theory of harmonic resonance in the brain.

[Figure 2.6]
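
One way to caricature that inspiration in code (a toy rendering, not the model from the thesis): a circular standing wave cos(nθ) peaks at n equally spaced angles, so a single resonance per harmonic, freely phase-shifted, can fit whole families of vertices at any orientation, with no dedicated receptive field per vertex type:

```python
# Toy rendering of the standing-wave intuition (not the thesis model): a
# circular standing wave cos(n*theta) peaks at n equally spaced angles, so
# one resonance per harmonic, freely phase-shifted, can fit whole families
# of vertices at any orientation.
import numpy as np

def best_harmonic(arm_angles_deg, max_n=8):
    """Return the harmonic n whose (phase-shifted) peaks best align with the arms."""
    arms = np.deg2rad(arm_angles_deg)
    candidates = ((n, phi) for n in range(1, max_n + 1)
                  for phi in np.linspace(0, 2 * np.pi, 360, endpoint=False))
    n, _ = max(candidates, key=lambda c: np.cos(c[0] * arms - c[1]).sum())
    return n

print("X vertex (0, 90, 180, 270):", best_harmonic([0, 90, 180, 270]))  # 4
print("Y vertex (90, 210, 330):  ", best_harmonic([90, 210, 330]))      # 3
print("V vertex (45, 135):       ", best_harmonic([45, 135]))           # 4
```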


Compare the above descriptions of point-of-view fragmentation, visual coherence, and symmetry as experienced on LSD, with our very own account of symmetrical pattern completion during psychedelic experiences as presented in Algorithmic Reduction of Psychedelic States (slightly edited for clarity):

Lower Symmetry Detection and Propagation Thresholds

Finally, this is perhaps the most interesting and ethically salient effect of psychedelics. The first three effects (tracers, drifting, and pattern recognition) are not particularly difficult to square with standard neuroscience. This fourth effect, while not incompatible with connectionist accounts, does suggest a series of research questions that may hint at an entirely new paradigm for understanding consciousness.

We have not seen anyone in the literature specifically identify this effect in all of its generality. The lowering of the symmetry detection threshold really has to be experienced to be believed. We claim that this effect manifests in all psychedelic experiences to a greater or lesser extent, and that many effects can in fact be explained by simply applying this effect iteratively.

Psychedelics make it easier to find similarities between any two given phenomenal objects. When applied to perception, this effect can be described as a lowering of the symmetry detection threshold. This effect is extremely general and symmetry should not be taken to refer exclusively to geometric symmetry.

How symmetries manifest depends on the set and setting. Researchers interested in verifying and exploring the quantitative and subjective properties of this effect will probably have to focus first on a narrow domain; the effect happens in all experiential modalities.

For now, let us focus on the case of visual experience. In this domain, the effect is what PsychonautWiki calls Symmetrical Texture Repetition:

[Image: symmetrical texture repetition]

Credit: Chelsea Morgan from PsychonautWiki and r/replications

Symmetry detection during psychedelic experiences requires that one’s attention interpret a given element in the scene as a symmetry element. Symmetry elements are geometrical points of reference about which symmetry operations can take place (such as axes of rotation, mirror planes, hyperplanes, etc.). In turn, a collection of symmetry elements defines a symmetry structure in the following way: a symmetry structure is a set of n-dimensional symmetry elements for which the qualities of the experience surrounding each element obey the symmetry constraints imposed by all the elements considered together.

Psychedelic symmetry detection can be (and typically is) recursively applied to previously constructed symmetry structures. At a given time multiple independent symmetry structures can coexist inside an experience. By guiding one’s attention one can make these structures interact and ultimately merge. Formally, each symmetry structure is capable of establishing a merging relationship with another symmetry structure. This is achieved by simultaneously focusing one’s attention on both. These relationships are fleeting, but they influence the evolution of the relative position of each symmetry element. When two symmetry structures are in a merging relationship, it is possible to rearrange them (with the aid of drifting and pattern recognition) to create a symmetrical structure that incorporates the symmetry elements of both substructures at once. To do so, one’s mind can either detect one (or several) more symmetry elements along which the previously-existing symmetry elements are made to conform, or, alternatively, if the two pre-existing symmetry structures share a symmetry element (e.g. an axis of rotation of order 3), these corresponding identical symmetry elements can fuse and become a bridge that merges both structures.
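
This merging rule lends itself to a toy formalization: treat a symmetry structure as a set of symmetry elements, and let any shared element act as the bridge. The element names below are invented for illustration:

```python
# Toy formalization of the merging rule (element names invented for
# illustration): a symmetry structure is a set of symmetry elements, and a
# shared element acts as the bridge that lets two structures fuse.
structure_a = {"mirror_x", "rotation_3"}
structure_b = {"rotation_3", "glide_y"}  # shares rotation_3 with A
structure_c = {"mirror_diag"}            # shares nothing with A

def merge(s1, s2):
    """Fuse two structures; the merged one must satisfy all constraints at once."""
    bridge = s1 & s2
    return s1 | s2, bridge

merged_ab, bridge_ab = merge(structure_a, structure_b)
print("A + B ->", sorted(merged_ab), "| bridge:", sorted(bridge_ab))
merged_ac, bridge_ac = merge(structure_a, structure_c)
print("A + C ->", sorted(merged_ac), "| bridge:",
      sorted(bridge_ac) or "none (harder: a new shared element must be found)")
```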

Surprisingly, valence seems to be related to psychedelic symmetry detection. As one constructs symmetry structures, one becomes aware of an odd and irresistible subjective pull towards building even higher levels of symmetry. In other words, every time the structure of one’s experience is simplified by identifying a new symmetry element in the scene, one’s whole experience seems to snap into a new (simplified) mode, and this comes with a positive feeling. This feeling can take many forms: it may feel blissful, interesting, beautiful, mind-expanding, and/or awe-producing, all depending on the specific structures that one is merging. Conversely, when two symmetry structures are such that merging them is either tricky or impossible, this leads to low valence: frustration, anxiety, pain, and an odd feeling of being stuck between two mutually unintelligible worlds. We hypothesize that this is the result of dissonance between the incompatible symmetry structures.

If one meditates in a sensorially-minimized room during a psychedelic experience while being aware that one’s symmetry detection threshold has been lowered by the substance, one can recursively re-apply this effect to produce all kinds of complex mathematical structures that incorporate complex symmetry element interactions. In other words, with the aid of concentration one can climb the symmetry gradient (i.e. increase the total number of symmetry elements) up to the point where the degrees of freedom afforded by the symmetry structure limit any further element from being incorporated into it. We will call these experiences peak symmetry states.

Future research should explore and compare the various states of consciousness that exhibit peak symmetry. There is very likely an enormous number of peak symmetry states, some of which are fairly suboptimal and others that cannot be improved upon. If there is a very deep connection between valence, symmetry, information and harmony, it would very likely show in this area. Indeed, we hypothesize that the highest levels of valence that can be consciously experienced involve peak symmetry states. Anecdotally, this connection has already been verified, with numerous trip reports of people achieving states of unimaginable bliss by inhabiting peak symmetry states (often described as fractal mandala-like mirror rooms).

The range of peak symmetry states includes fractals, tessellations, graphs, and higher dimensional projections. Which one of these states contains the highest degree of inter-connectivity? And if psychedelic symmetry is indeed related to conscious bliss, which experience of symmetry is human peak bliss?

The pictures above all illustrate possible peak symmetry states one can achieve by combining psychedelics and meditation. The pictures illustrate only the core structure of the symmetries present in these states of consciousness. What is being reflected is the very raw “feels” of each patch of your experiential field. These pictures therefore miss the actual raw feelings of the whole experience, but they do show a rough outline of the symmetrical relationships possible in one of these experiences.

Since control interruption co-occurs with the psychedelic symmetry effect, previously-detected symmetries tend to linger for long periods of time. For this reason, the kinds of symmetries one can detect at a given point in time are a function of the symmetries that are currently being highlighted. And thanks to drifting and pattern recognition enhancement, there is some wiggle room for your mind to re-arrange the location of the symmetries experienced. Together, the four effects enable, at times, a smooth iterative integration of so many symmetries that one’s consciousness becomes symmetrically interconnected to an unbelievable degree.

What may innocently start as a simple two-sided mirror symmetry can end up producing complex arrangements of self-reflecting mirrors showing glimpses of higher and higher dimensional symmetries. Studying the mathematical properties of the allowed symmetries is a research project that has only just begun. I hope that one day dedicated mathematicians will describe in full the class of possible high-order symmetries that humans can experience in these states of consciousness.

Anecdotally, each of the 17 possible wallpaper symmetry groups can be instantiated with this effect. In other words, psychedelic states lower the symmetry detection threshold for all of the mathematically available symmetrical tessellations.

[Image: wade_symmetry_best_blank_2]

All of the 17 2-dimensional wallpaper groups can be experienced with symmetry planes detected, amplified and re-arranged during a psychedelic experience.
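For readers who want to play with symmetry detection outside of phenomenology, here is a minimal sketch (mine, not from the article) that checks which point-group operations leave a square periodic tile invariant. Classifying the full wallpaper group would additionally require checking glide reflections and the translation lattice; this covers only the rotation/mirror part.

    import numpy as np

    def invariant_symmetries(tile):
        # Check which point-group operations map the tile exactly onto itself.
        ops = {
            "rotation 90":       np.rot90(tile, 1),
            "rotation 180":      np.rot90(tile, 2),
            "horizontal mirror": np.flipud(tile),
            "vertical mirror":   np.fliplr(tile),
            "diagonal mirror":   tile.T,
        }
        return [name for name, image in ops.items() if np.array_equal(tile, image)]

    # A tile with the full symmetry of the square (the point group of p4m):
    tile = np.array([[1, 2, 1],
                     [2, 3, 2],
                     [1, 2, 1]])
    print(invariant_symmetries(tile))  # all five operations survive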

Revisiting the symmetrical texture repetition of grass shown above, we can now discover that the picture displays the wallpaper symmetry found in the lower left circle above:

[Image: grass_symmetries]

At very high doses, the symmetry completion is so strong that at any point one risks confusing left and right, and thus losing one’s grasp of orientation in space and time. Depersonalization is, at times, the result of the information that is lost when intense symmetry completion is going on. One’s self-models become symmetrical too quickly, and one finds it hard to articulate a grounded point of view.


In Peaceful Qualia: The Manhattan Project of Consciousness we explored possible information-processing applications for climbing the symmetry gradient as described above:

LSD-like states allow the global binding of otherwise incompatible schemas by softening the degree to which neighborhood constraints are enforced. The entire experience becomes a sort of chaotic superposition of locally bound islands that can, each in its own way, tell sensory-linguistic stories in parallel about the unique origin and contribution of their corresponding gestalts to the narrative of the self.

This phenomenon forces, as it were, the onset of cognitive dissonance between incompatible schemas that would otherwise evade mutual contact. On the bright side, it also allows mutual resonance between parts that agree with each other. The global inconsistencies are explored and minimized. One’s mind can become a glorious consensus.

[Image: squarespiral2]

Each square represents, and carries with it, the information of a previously experienced cognitive gestalt (situational memories, ideas, convictions, etc.). Some gestalts never come up together naturally. The LSD-like state allows their side-by-side comparison.

In therapy, LSD-like states had been used for many decades to integrate disparate parts of one’s personality into a (more) coherent and integrated lifeworld. But at first, scientists didn’t know why this worked.

The Turing module then discovered that the kaleidoscopic world of acid can be compared to raising the temperature within an Ising model. If different gestalts imply a variety of semantic-affective constraints, kaleidoscopic Frame Stacking has the formal effect of expanding the region of one’s mind that is taken into consideration for global consistency at any given point in time. The local constraints become more loose, giving global constraints the upper hand. The degree of psychedelia is approximately proportional to the temperature of the model, and when you let it cool, the grand pattern is somewhat different. It is more stable; one arrives at a more globally consistent state. Your semantic-affective constraints are, on the whole, better satisfied. The Turings called this phenomenon qualia annealing.

[Image: coarsening_early_small]

Ising Model – A simple computational analogy for the LSD-induced global constraint satisfaction facilitation.
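A minimal runnable sketch of the analogy (mine, not from Peaceful Qualia; all parameters are arbitrary illustration choices): a Metropolis-style 2D Ising model in which spins stand in for gestalts. Raising the temperature loosens local constraints, and slowly lowering it anneals the lattice into larger, more globally consistent domains.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 32
    spins = rng.choice([-1, 1], size=(N, N))  # random initial "gestalts"

    def energy_delta(s, i, j):
        # Energy change from flipping spin (i, j); periodic boundaries.
        nb = s[(i + 1) % N, j] + s[(i - 1) % N, j] + s[i, (j + 1) % N] + s[i, (j - 1) % N]
        return 2 * s[i, j] * nb

    def sweep(s, T):
        # One Metropolis sweep at temperature T.
        for _ in range(N * N):
            i, j = rng.integers(N), rng.integers(N)
            dE = energy_delta(s, i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1

    def consistency(s):
        # Fraction of neighboring pairs that agree: a global-consistency proxy.
        return ((s == np.roll(s, 1, 0)).mean() + (s == np.roll(s, 1, 1)).mean()) / 2

    print(f"before: {consistency(spins):.2f}")
    for _ in range(30):                  # high-temperature "kaleidoscopic" phase
        sweep(spins, T=5.0)
    for T in np.linspace(5.0, 0.5, 80):  # slow cooling: the annealing step
        sweep(spins, T)
    print(f"after annealing: {consistency(spins):.2f}")

Cooling slowly rather than quenching is what lets the lattice escape frustrated local arrangements, which is the formal content of the “let it cool and the grand pattern is more stable” line in the quote.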


Another key reference within this theme is the discussion of non-Euclidean symmetry in the article titled The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes. Here we jump in medias res to the description of the 2nd and 3rd plateaus of DMT intoxication:

(2) The Chrysanthemum

If one ups the dose a little bit and lands somewhere in the range of 4 to 8 mg, one is likely to experience what Terence McKenna called “the Chrysanthemum”. This usually manifests as a surface saturated with a sort of textured fabric composed of intricate symmetrical relationships, bright colors, shifting edges, and shimmering, pulsing superposition patterns of harmonic linear waves of many different frequencies.

Depending on the dose consumed, one may experience either one or several semi-parallel channels. Whereas a threshold dose usually presents you with a single strong vibe (or ambiance), the Chrysanthemum level often has several competing vibes, each bidding for your attention. Here are some examples of what the visual component of this state of consciousness may look like.

The visual component of the Chrysanthemum is often described as “the best screen saver ever”, and if you happen to experience it in a good mood you will almost certainly agree with that description, as it is usually extremely harmonious, symmetric, and beautiful in uncountable ways. No external input can possibly replicate the information density and intricate symmetry of this state; such a state has to be endogenously generated as a sort of harmonic attractor of your brain dynamics.

You can find many replications of Chrysanthemum-level DMT experiences on the internet, and I encourage you to examine their implicit symmetries (this replication is one of my all-time favorites).

In Algorithmic Reduction of Psychedelic States we posited that any one of the 17 wallpaper symmetry groups can be instantiated as the symmetries that govern psychedelic visuals. Unfortunately, unlike the generally slow evolution of usual psychedelic visuals, DMT’s vibrational frequency forces such visuals to evolve at a speed that makes it difficult for most people to spot the implicit symmetry elements that give rise to the overall mathematical structure underneath one’s experience. For this reason it has been difficult to verify that all 17 wallpaper groups are possible in DMT states. Fortunately, we were recently able to confirm that this is in fact the case thanks to someone who trained himself to do just this, i.e. to detect symmetry elements in patterns at an outstanding speed.

An anonymous psychonaut (whom we will call researcher A) sent a series of trip reports to Qualia Computing detailing the mathematical properties of psychedelic visuals under various substances and dose regimens. A is an experienced psychonaut and a math enthusiast who recently trained himself to recognize (and name) the mathematical properties of symmetrical patterns (such as in works of art or biological organisms). In particular, he has become fluent at naming the symmetries exhibited by psychedelic visuals. In the context of 2D visuals on surfaces, A confirms that the symmetrical textures that arise in psychedelic states can exhibit any one of the 17 wallpaper symmetry groups. Likewise, he has been able to confirm that every possible spherical symmetry group can also be instantiated in one’s mind as a resonant attractor in these states.

The images below show some examples of the visuals that A has experienced on 2C-B, LSD, 4-HO-MET and DMT (sources: top left, top middle; the rest were made with this service):

The Chrysanthemum level interacts with sensory input in an interesting way: the texture of anything one looks at quickly becomes saturated with nested 2-dimensional symmetry groups. If you take enough DMT to reach this level and keep your eyes open while looking at a patterned surface (i.e. a statistical texture), it will symmetrify beyond recognition. A explains that at this level DMT visuals share some qualities with those of, say, LSD, mescaline, and psilocin. Like other psychedelics, DMT’s Chrysanthemum level can instantiate any 2-dimensional symmetry, yet there are important differences from other psychedelics at this dose range. These include the consistent change in ambiance (already present at threshold doses), the complexity and consistency of the symmetrical relationships (much more dense and whole-experience-consistent than is usually possible with other psychedelics), and the speed (with a control-interruption frequency reaching up to 30 hertz, compared to 10-20 hertz for most psychedelics). Thus, people tend to point out that DMT visuals (at this level) are “faster, smaller, more detailed and more globally consistent” than on comparable levels of alteration from similar agents.

Now, if you take a dose that is a little higher (in the ballpark of 8 to 12 mg), the Chrysanthemum will start doing something new and interesting…

(3) The Magic Eye Level

A great way to understand the Magic Eye level of DMT effects is to think of the Chrysanthemum as the texture of an autostereogram (colloquially known as “Magic Eye” pictures). Our visual experience can easily be decomposed into two points of view (corresponding to the feed coming from each eye) that share information in order to solve the depth-map problem in vision. That is, to map each visual quale to a space with relative distances so that (a) the input is explained, and (b) recognizable everyday objects are represented as implicit shapes beneath the depth-map. You can think of this process as a sort of handshake between bottom-up perception and top-down modeling.

In everyday conditions one solves the depth-map problem within a second of opening one’s eyes (minus minor details that are added as one looks around). But on DMT, the “low-level perceptions” look like a breathing Chrysanthemum, which means that the top-down modeling has all that “constantly shifting” stuff to play with. What to make of it? Anything you can think of.
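Since the autostereogram analogy is doing real explanatory work here, a toy random-dot stereogram generator may make it concrete (my sketch; the naive “copy the pixel sep columns to the left” rule is the textbook construction, and the parameter values are invented). A random texture plays the role of the Chrysanthemum, and the horizontal repetition pattern encodes the depth map that the visual system has to invert:

    import numpy as np

    def autostereogram(depth, eye_sep=60):
        # Each pixel repeats the pixel `sep` columns to its left, with the
        # repeat distance shrinking for nearer points. Fusing the image with
        # relaxed eyes recovers the depth map hidden in the texture.
        h, w = depth.shape
        img = np.random.randint(0, 2, size=(h, w))
        for y in range(h):
            for x in range(w):
                sep = int(eye_sep * (1 - 0.4 * depth[y, x]))
                if x >= sep:
                    img[y, x] = img[y, x - sep]
        return img

    # Depth map of a square floating above the background (1 = near, 0 = far):
    depth = np.zeros((120, 200))
    depth[40:80, 70:130] = 1.0
    print(autostereogram(depth).shape)  # (120, 200)

In the article’s terms: on DMT the texture keeps shifting, so the hidden surface that top-down modeling extracts from it can keep being re-solved into different world-sheets.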

There are three major components of variance on the DMT Magic Eye level:

  1. Texture (dependent on the Chrysanthemum’s evolution)
  2. World-sheet (non-occluding 3D1T depth maps)
  3. Extremely lowered information copying threshold.

The image on the left is a lobster, the one in the center is a cone, and the one on the right contains furniture (a lamp, a chair, and a table). Notice that what you see is a sort of depth-map which encodes shapes. We will call this depth-map, together with the appearance of movement and acceleration represented in it, a world-sheet.

World-Sheets

The world-sheet encodes the “semantic content” of the scene and is capable of representing arbitrary situations (including information about what you are seeing, where you are, what the entities there are doing, what is happening, etc.).

It is common to experience scenes from mundane-looking places like ice-cream stores, play pens, household situations, furniture rooms, apparel, etc. Likewise, one frequently sees entities in these places, but they rarely seem to mind you because their world is fairly self-contained, as if one were seeing it through a window. People often report that the worlds they saw on a DMT trip were all “made of the same thing”. This can be interpreted as the texture becoming the surfaces of the world-sheet, so that the surfaces of the tables, chairs, ice-cream cones, the bodies of the people, and so on are all patterned with the same texture (just as in actual autostereograms). This texture is indeed the Chrysanthemum, completely contorted to accommodate all the curvature of the scene.

Magic Eye level scenes often include 3D geometrical shapes like spheres, cones, cylinders, cubes, etc. The complexity of the scene is roughly dose-dependent. As one ups the dose (while still remaining within the Magic Eye level), complex translucent qualia crystals in three dimensions start to become a possibility.

Whatever phenomenal object you experience on this level that lives for more than a millisecond needs effective strategies for surviving in an ecosystem of other objects adapted to that level. Given the extremely lowered information copying threshold, whatever is good at making copies of itself will begin to tessellate, mutate, and evolve, stealing as much of your attention as possible along the way. Cyclic transitions occupy one’s attention: objects quickly become scenes, which quickly become gestalts, from which a new texture evolves, in which new objects are detected, and so on ad infinitum.

[Image: katoite-hydrogarnet]

A reports that at this dose range one can experience at least some of the 230 space groups as objects represented in the world-sheet. For example, A reports having stabilized a structure with Pm-3m symmetry, not unlike the structure of ZIF-71-RHO. Visualizing such complex 3D symmetries, however, does seem to require previous training and high levels of mental concentration (i.e. in order to ensure that all the symmetry elements are indeed what they are supposed to be).

There is so much qualia lying around, though, that at times not even your normal space can contain it all. Any regular or semi-regular symmetrical structure you construct by centering your attention is prone to overflow if you focus too much on it. What does this mean? If you focus too much on, for example, the number 6, your mind might represent the various ways in which you can arrange six balls in a perfectly symmetrical way. Worlds made of hexagons and octahedrons interlocked in complex but symmetrical ways may begin to tessellate your experiential field. With every second you find more and more ways of representing the number six in interesting, satisfying, metaphorically-sound synesthetic ways (cf. Thinking in Numbers). Now, what happens if you try to represent the number seven in a symmetric way on the plane? Well, the problem is that you will have too many heptagons to fit in Euclidean space (cf. Too Many Triangles). Thus the resulting symmetrical patterns will seem to overflow the plane. This is often felt as a folding and fluid re-arrangement; when there is no space left in a region, it either expands space or is felt as some sort of synesthetic tension or stress, like a sense of crackling under a lot of pressure.
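The arithmetic behind the heptagon overflow is standard geometry (my gloss, not from the trip reports): the interior angle of a regular heptagon is too large for even three of them to meet flat at a vertex, so an all-heptagon tiling forces negative (hyperbolic) curvature.

    \theta_7 = \frac{(7-2)\pi}{7} = \frac{5\pi}{7} \approx 128.57^\circ,
    \qquad 3 \times 128.57^\circ \approx 385.71^\circ > 360^\circ

Compare hexagons, where 3 × 120° = 360° exactly; the roughly 25.7° of excess angle per vertex is what has to be absorbed by folding, matching the felt sense of crackling under pressure described above.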

In particular, A claims that in the lower ranges of the DMT Magic Eye level the texture of the Chrysanthemum tends to exhibit heptagonal and triheptagonal tilings (as shown in the picture above). A explains that at the critical point between the Chrysanthemum and the Magic Eye levels, the rate of symmetry detection in the Chrysanthemum can no longer be contained within a 2D surface. Thus, the surface begins to fold, often in semi-symmetric ways. Every time one “recognizes” an object on this “folding Chrysanthemum”, the extra curvature is passed on to that object. As the dose increases, one interprets more and more of this extra curvature and ends up shaping a complex and highly dynamic spatiotemporal depth map with hyperbolic folds. In the upper ranges of the Magic Eye level the world-sheet is so curved that the scenes one visualizes are intricate and expansive, feeling at times as if one were able to peer through one’s horizon in all directions and see oneself and one’s world from a distance. At some critical point one may feel like the space around one is folding into a huge dome whose walls are made of whatever texture + world-sheet combination happened to win the Darwinian selection pressures applied to the qualia patterns on the Magic Eye level. This concentrated hyperbolic synesthetic texture is what becomes the walls of the Waiting Room…


As suggested by the quotes above, psychedelic symmetries are extremely beautiful. This is puzzling for most worldviews. But once you take into account the Tyranny of the Intentional Object and the Symmetry Theory of Valence, it begins to make sense why peak symmetry on psychedelics is so delightfully amazing (sometimes unimaginably better than a great orgasm or a back-rub on ecstasy). In this vein, we are proud to point out that we have worked out some precise, empirically testable predictions based on connectome-specific harmonic waves and the symmetry theory of valence (see: Quantifying Bliss).


Interestingly, the process of point-of-view fragmentation and subsequent annealing to global geometric coherence is hinted at by John C. Lilly in his book Programming and Metaprogramming in the Human Biocomputer (you can read the relevant quote here: Psychedelic alignment cascades).


Finally, I would like to draw attention to David Pearce‘s quote about psychedelics: Their Scientific Significance is Hard to Overstate.

As evidenced by Steven Lehar’s writeup (and the other quotes and references provided above), giving psychedelics to brilliant people with a scientific background in cognitive science and a talent for natural philosophy can indeed expand our evidential base concerning the nature of consciousness and the way our brains work.

It is thus far more useful for the advancement of the science of consciousness to allocate such experiences to serious scientifically-minded psychonauts than it is to give those same agents to people with pre-scientific frameworks. The phenomenological descriptions and insights provided by a single Steven Lehar on acid are worth a thousand Buddhists, French Existentialists, poets, and film-makers on LSD.

Either way, it is unconscionable that today most leading academics working on the problem of consciousness have no personal experience with these agents, nor do they show much interest in the alien state-spaces that they disclose. That’s about as weird as physicists only showing interest in what happens at room temperature, even though most precise mathematical theories of the physical world can only be tested in extreme conditions (such as high-energy particle collisions). Just as we can expect that a few observations of the behavior of matter in extreme conditions will provide far more information than thousands of observations of matter in known “everyday” conditions, the ultimate nature of qualia is most likely to be understood by studying its properties in extreme (e.g. high-energy) neuronal environments.

The Universal Plot: Part I – Consciousness vs. Pure Replicators

“It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand, ‘Who are we?'”

– Erwin Schrödinger in Science and Humanism (1951)

 

“Should you or not commit suicide? This is a good question. Why go on? And you only go on if the game is worth the candle. Now, the universe has been going on for an incredibly long time. Really, a satisfying theory of the universe should be one that’s worth betting on. That seems to me to be absolutely elementary common sense. If you make a theory of the universe which isn’t worth betting on… why bother? Just commit suicide. But if you want to go on playing the game, you’ve got to have an optimal theory for playing the game. Otherwise there’s no point in it.”

– Alan Watts, talking about Camus’ claim that suicide is the most important question (cf. The Most Important Philosophical Question)

In this article we provide a novel framework for ethics which focuses on the perennial battle between wellbeing-oriented consciousness-centric values and valueless patterns that happen to be great at making copies of themselves (a.k.a. Consciousness vs. Pure Replicators). This framework extends and generalizes modern accounts of ethics and intuitive wisdom, making intelligible numerous paradigms that previously lived in entirely different worlds (e.g. incongruous aesthetics and cultures). We place this worldview within a novel scale of ethical development with the following levels: (a) The Battle Between Good and Evil, (b) The Balance Between Good and Evil, (c) Gradients of Wisdom, and finally, the view that we advocate: (d) Consciousness vs. Pure Replicators. Moreover, we analyze each of these worldviews in light of our philosophical background assumptions and posit that (a), (b), and (c) are, at least in spirit, approximations to (d), except that they are less lucid, more confused, and liable to exploitation by pure replicators. Finally, we provide a mathematical formalization of the problem at hand, and discuss the ways in which different theories of consciousness may affect our calculations. We conclude with a few ideas for how to avoid particularly negative scenarios.

Introduction

Throughout human history, the big picture account of the nature, purpose, and limits of reality has evolved dramatically. All religions, ideologies, scientific paradigms, and even aesthetics have background philosophical assumptions that inform their worldviews. One’s answers to the questions “what exists?” and “what is good?” determine the way in which one evaluates the merit of beings, ideas, states of mind, algorithms, and abstract patterns.

Kuhn’s claim that different scientific paradigms are mutually unintelligible (e.g. consciousness realism vs. reductive eliminativism) can be extended to worldviews in a more general sense. It is unlikely that we’ll be able to convey the Consciousness vs. Pure Replicators paradigm by justifying each of the assumptions used to arrive at it one by one, starting from current ways of thinking about reality. This is because these background assumptions support each other and are, individually, not derivable from current worldviews. They need to appear together, as a unit, in order to hang together. Hence, we now make the jump and show you, without further ado, all of the background assumptions we need:

  1. Consciousness Realism
  2. Qualia Formalism
  3. Valence Structuralism
  4. The Pleasure Principle (and its corollary The Tyranny of the Intentional Object)
  5. Physicalism (in the causal sense)
  6. Open Individualism (also compatible with Empty Individualism)
  7. Universal Darwinism

These assumptions have been discussed in previous articles. In the meantime, here is a brief description: (1) is the claim that consciousness is an element of reality rather than simply the improper reification of illusory phenomena, such that your conscious experience right now is as much a factual and determinate aspect of reality as, say, the rest mass of an electron. In turn, (2) qualia formalism is the notion that consciousness is in principle quantifiable. Assumption (3) states that valence (i.e. the pleasure/pain axis, how good an experience feels) depends on the structure of such experience (more formally, on the properties of the mathematical object isomorphic to its phenomenology).

(4) is the assumption that people’s behavior is motivated by the pleasure-pain axis even when they think that’s not the case. For instance, people may explicitly represent the reason for doing things in terms of concrete facts about the circumstance, and the pleasure principle does not deny that such reasons are important. Rather, it merely says that such reasons are motivating because one expects/anticipates less negative valence or more positive valence. The Tyranny of the Intentional Object describes the fact that we attribute changes in our valence to external events and objects, and believe that such events and objects are intrinsically good (e.g. we think “ice cream is great” rather than “I feel good when I eat ice cream”).

Physicalism (5) in this context refers to the notion that the equations of physics fully describe the causal behavior of reality. In other words, the universe behaves according to physical laws and even consciousness has to abide by this fact.

Open Individualism (6) is the claim that we are all one consciousness, in some sense. Even though it sounds crazy at first, there are rigorous philosophical arguments in favor of this view. Whether this is true or not is, for the purpose of this article, less relevant than the fact that we can experience it as true, which happens to have both practical and ethical implications for how society might evolve.

Finally, (7) Universal Darwinism refers to the claim that natural selection works at every level of organization. The explanatory power of evolution and fitness landscapes generated by selection pressures is not confined to the realm of biology. Rather, it is applicable all the way from the quantum foam to, possibly, an ecosystem of universes.

The power of a given worldview lies not only in its capacity to explain our observations about the inanimate world and the quality of our experience, but also in its capacity to explain, *in its own terms*, why other worldviews are popular as well. In what follows we will utilize these background assumptions to evaluate other worldviews.

 

The Four Worldviews About Ethics

The following four stages describe a plausible progression of thoughts about ethics and the question “what is valuable?” as one learns more about the universe and philosophy. Despite the similarity of the first three levels to the levels of other scales of moral development (e.g. this, this, this, etc.), we believe that the fourth level is novel, understudied, and very, very important.

1. The “Battle Between Good and Evil” Worldview

“Every distinction wants to become the distinction between good and evil.” – Michael Vassar (source)

Common-sensical notions of essential good and evil are pre-scientific. For reasons too complicated to elaborate on for the time being, the human mind is capable of evoking an agentive sense of ultimate goodness (and of ultimate evil).


Good vs. Evil? God vs. the Devil?

Children are often taught that there are good people and bad people; that evil beings exist objectively, and that it is righteous to punish them and view them with scorn. On this level people reify anti-social behaviors as sins.

Essentializing good and evil, and tying them to entities, seems to be an early developmental stage of people’s conception of ethics, and many people end up perpetually stuck there. Several religions (especially the Abrahamic ones) are often practiced in such a way as to reinforce this worldview. That said, many ideologies take advantage of the fact that a large part of the population is at this level to recruit adherents by redefining “what good and bad is” according to the needs of such ideologies. As a psychological attitude (rather than as a theory of the universe), reactionary and fanatical social movements often rely implicitly on this way of seeing the world, in which there are bad people (Jews, traitors, infidels, over-eaters, etc.) who are seen as corrupting the soul of society and who deserve to have their fundamental badness exposed and exorcised with punishment in front of everyone else.


Traditional notions of God vs. the Devil can be interpreted as the personification of positive and negative valence

Implicitly, this view tends to gain psychological strength from the background assumption of Closed Individualism (which allows you to imagine that people can be essentially bad). Likewise, this view tends to be naïve about the importance of valence in ethics. Good feelings are often interpreted as the result of being aligned with fundamental goodness, rather than as positive states of consciousness that happen to be triggered by a mix of innate and programmable things (including cultural identifications). Moreover, good feelings that don’t come in response to the preconceived universal order are seen as demonic and aberrant.

From our point of view (the 7 background assumptions above), we interpret this particular worldview as something that we might be biologically predisposed to buy into. Believing in the battle between good and evil was probably evolutionarily adaptive in our ancestral environment, and might reduce many frictional costs that arise from having a more subtle view of reality (e.g. “The cheaper people are to model, the larger the groups that can be modeled well enough to cooperate with them.” – Michael Vassar). Thus, there are often pragmatic reasons to adopt this view, especially when the social environment does not have enough resources to sustain a more sophisticated worldview. Additionally, at an individual level, creating strong boundaries around what is or is not permissible can be helpful when one has low levels of impulse control (though it may come at the cost of reduced creativity).

On this level, explicit wireheading (whether done right or not) is perceived either as sinful (defying God’s punishment) or as a sort of treason (disengaging from the world). Whether one feels good or not should be left to the whims of the higher order. On the flipside, based on the pleasure principle it is possible to interpret the desire to be righteous as being motivated by high valence states, and reinforced by social approval, all the while the tyranny of the intentional object cloaks this dynamic.

It’s worth noting that cultural conservatism, low levels of the psychological constructs of Openness to Experience and Tolerance of Ambiguity, and high levels of Need for Closure all predict getting stuck in this worldview for one’s entire life.

2. The “Balance Between Good and Evil” Worldview

TVTropes has a great summary of the sorts of narratives that express this particular worldview and I highly recommend reading that article to gain insight into the moral attitudes compatible with this view. For example, here are some reasons why Good cannot or should not win:

Good winning includes: the universe becoming boring, society stagnating or collapsing from within in the absence of something to struggle against or giving people a chance to show real nobility and virtue by risking their lives to defend each other. Other times, it’s enforced by depicting ultimate good as repressive (often Lawful Stupid), or by declaring concepts such as free will or ambition as evil. In other words “too much of a good thing”.

– Balance Between Good and Evil, TVTropes

Now, the stated reasons why people might buy into this view are rarely their true reasons. Deep down, the Balance Between Good and Evil is adopted because people want to differentiate themselves from those who believe in (1) in order to signal intellectual sophistication, because they experience learned helplessness after trying to defeat evil without success (often in the form of resilient personal failings or societal flaws), or because they find the view compelling at an intuitive emotional level (i.e. they have internalized the hedonic treadmill and project it onto the rest of reality).

In all of these cases, though, there is something somewhat paradoxical about holding this view. And that is that people report that coming to terms with the fact that not everything can be good is itself a cause of relief, self-acceptance, and happiness. In other words, holding this belief is often mood-enhancing. One can also confirm the fact that this view is emotionally load-bearing by observing the psychological reaction that such people have to, for example, bringing up the Hedonistic Imperative (which asserts that eliminating suffering without sacrificing anything of value is scientifically possible), indefinite life extension, or the prospect of super-intelligence. Rarely are people at this level intellectually curious about these ideas, and they come up with excuses to avoid looking at the evidence, however compelling it may be.

For example, some people are lucky enough to be born with a predisposition to being hyperthymic (which, contrary to preconceptions, does the opposite of making you a couch potato). People’s hedonic set-point is at least partly genetically determined, and simply avoiding some variants of the SCN9A gene with preimplantation genetic diagnosis would greatly reduce the number of people who needlessly suffer from chronic pain.

But this is not seen with curious eyes by people who hold this or the previous worldview. Why? Partly because it would be painful to admit that both oneself and others are stuck in a local maximum of wellbeing and that examining alternatives might yield very positive outcomes (i.e. omission bias). But at its core, this willful ignorance can be explained as a consequence of the fact that people at this level get a lot of positive valence from interpreting present and past suffering in such a way that it becomes tied to their core identity. Pride in having overcome their past sufferings, and personal attachment to their current struggles and anxieties, binds them to this worldview.

If it wasn’t clear from the previous paragraph, this worldview often requires a special sort of chronic lack of self-insight. It ultimately relies on a psychological trick. One never sees people who hold this view voluntarily breaking their legs, taking poison, or burning their assets to increase the goodness elsewhere as an act of altruism. Instead, they use this worldview as a mood-booster, and in practice it is also susceptible to the same sort of fanaticism as the first one (although somewhat less so). “There can be no light without the dark. And so it is with magic. Myself, I always try to live within the light.” – Horace Slughorn.

Additionally, this view helps people rationalize the negative aspects of one’s community and culture. For example, it is not uncommon for people to say that buying factory farmed meat is acceptable on the grounds that “some things have to die/suffer for others to live/enjoy life.” Balance Between Good and Evil is a close friend of status quo bias.

Hinduism, Daoism, and quite a few interpretations of Buddhism work best within this framework. Getting closer to God and ultimate reality is not done by abolishing evil, but by embracing the unity of all and fostering a healthy balance between health and sickness.

It’s also worth noting that the balance between good and evil tends to be recursively applied, so that one is not able to “re-define our utility function from ‘optimizing the good’ to optimizing ‘the balance of good and evil’ with a hard-headed evidence-based consequentialist approach.” Indeed, trying to do this is then perceived as yet another incarnation of good (or evil) which needs to also be balanced with its opposite (willful ignorance and fuzzy thinking). One comes to the conclusion that it is the fuzzy thinking itself that people at this level are after: to blur reality just enough to make it seem good, and to feel like one is not responsible for the suffering in the world (especially by inaction and by not thinking clearly about how one could help). “Reality is only a Rorschach ink-blot, you know” – Alan Watts. So this becomes a justification for thinking less than one really has to about the suffering in the world. Then again, it’s hard to blame people for trying to keep the collective standards of rigor lax, given the high proportion of fanatics who adhere to the “battle between good and evil” worldview, and who will jump the gun to demonize anyone who is slacking off and is not stressed out all the time, constantly worrying about the question “could I do more?”

(Note: if one is actually trying to improve the world as much as possible, being stressed out about it all the time is not the right policy).

3. The “Gradients of Wisdom” Worldview

David Chapman’s HTML book Meaningness might describe both of the previous worldviews as variants of eternalism. In the context of his work, eternalism refers to the notion that there is an absolute order and meaning to existence. When applied to codes of conduct, this turns into “ethical eternalism”, which he defines as: “the stance that there is a fixed ethical code according to which we should live. The eternal ordering principle is usually seen as the source of the code.” Chapman eloquently argues that eternalism has many side effects, including: deliberate stupidity, attachment to abusive dynamics, constant disappointment and self-punishment, and so on. By realizing that, in some sense, no one knows what the hell is going on (and those who do are just pretending) one takes the first step towards the “Gradients of Wisdom” worldview.

At this level people realize that there is no evil essence. Some might talk about this in terms of there “not being good or bad people”, but rather just degrees of impulse control, knowledge about the world, beliefs about reality, emotional stability, and so on. A villain’s soul is not connected to some kind of evil reality. Rather, his or her actions can be explained by the causes and conditions that led to his or her psychological make-up.

Sam Harris’ ideas as expressed in The Moral Landscape evoke this stage very clearly. Sam explains that just as health is a fuzzy but important concept, so is psychological wellbeing, and that for this reason we can objectively assess cultures as more or less in agreement with human flourishing.

Indeed, many people who are at this level do believe in valence structuralism, recognizing that some states of consciousness are inherently better than others in some intrinsic subjective-value sense.

However, there is usually no principled framework to assess whether a certain future is indeed optimal or not. There is little hard-headed discussion of population ethics, for fear of sounding unwise or insensitive. And when push comes to shove, they lack good arguments to decisively establish why particular situations might be bad. In other words, there is room for improvement, and such improvement might eventually come from more rigor and bullet-biting. In particular, a more direct examination of the implications of Open Individualism, the Tyranny of the Intentional Object, and Universal Darwinism can allow someone on this level to make a breakthrough. Here is where we come to:

4. The “Consciousness vs. Pure Replicators” Worldview

In Wireheading Done Right we introduced the concept of a pure replicator:

I will define a pure replicator, in the context of agents and minds, to be an intelligence that is indifferent towards the valence of its conscious states and those of others. A pure replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to themselves or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Presumably our genes are pure replicators. But we, as sentient minds who recognize the intrinsic value (both positive and negative) of conscious experiences, are not pure replicators. Thanks to a myriad of fascinating dynamics, it so happened that making minds who love, appreciate, think creatively, and philosophize was a side effect of the process of refining the selfishness of our genes. We must not take for granted that we are more than pure replicators ourselves, and that we care both about our wellbeing and the wellbeing of others. The problem now is that the particular selection pressures that led to this may not be present in the future. After all, digital and genetic technologies are drastically changing the fitness landscape for patterns that are good at making copies of themselves.

In an optimistic scenario, future selection pressures will make us all naturally gravitate towards super-happiness. This is what David Pearce posits in his essay “The Biointelligence Explosion”:

As the reproductive revolution of “designer babies” gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects – a novel kind of selection pressure to replace the “blind” genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive “loser”? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare – and smarter than Einstein.

– David Pearce in The Biointelligence Explosion

In a pessimistic scenario, the selection pressures lead to the opposite direction, where negative experiences are the only states of consciousness that happen to be evolutionarily adaptive, and so they become universally used.

There are a number of thinkers and groups who can be squarely placed on this level, and, relative to the general population, they are extremely rare (see: The Future of Human Evolution, A Few Dystopic Future Scenarios, Book Review: Age of EM, Nick Land’s Gnon, Spreading Happiness to the Stars Seems Little Harder than Just Spreading, etc.). See also**. What is much needed now is to formalize the situation and work out what we could do about it. But first, some thoughts about the current state of affairs.

There are at least some encouraging facts that suggest it is not too late to prevent a pure replicator takeover. There are memes, states of consciousness, and resources that can be used in order to steer evolution in a positive direction. In particular, as of 2017:

  1. A very big proportion of the economy is dedicated to trading positive experiences for money, rather than just survival or power tools. Thus an economy of information about states of consciousness is still feasible.
  2. There is a large fraction of the population that is altruistic and would be willing to cooperate with the rest of the world to avoid catastrophic scenarios.
  3. Happy people are more motivated, productive, engaged, and ultimately, economically useful (see hyperthymic temperament).
  4. Many people have explored Open Individualism and are interested in (or at least curious about) the idea that we are all one.
  5. A lot of people are fascinated by psychedelics and the non-ordinary states of consciousness that they induce.
  6. MDMA-like consciousness is both very positive in terms of its valence and, amazingly, extremely pro-social, and future sustainable versions of it could be recruited to stabilize societies where the highest value is the collective wellbeing.

It is important to not underestimate the power of the facts laid out above. If we get our act together and create a Manhattan Project of Consciousness we might be able to find sustainable, reliable, and powerful methods that stabilize a hyper-motivated, smart, super-happy and super-prosocial state of consciousness in a large fraction of the population. In the future, we may all by default identify with consciousness itself rather than with our bodies (or our genes), and be intrinsically (and rationally) motivated to collaborate with everyone else to create as much happiness as possible as well as to eradicate suffering with technology. And if we are smart enough, we might also be able to solidify this state of affairs, or at least shield it against pure replicator takeovers.

The beginnings of that kind of society may already be underway. Consider for example the contrast between Burning Man and Las Vegas. Burning Man is a place that works as a playground for exploring post-Darwinian social dynamics, in which people help each other overcome addictions and affirm their commitment to helping all of humanity. Las Vegas, on the other hand, might be described as a place that is filled to the top with pure replicators in the form of memes, addictions, and denial. The present world has the potential for both kinds of environments, and we do not yet know which one will outlive the other in the long run.

Formalizing the Problem

We want to specify the problem in a way that will make it mathematically intelligible. In brief, in this section we focus on specifying what it means to be a pure replicator in formal terms. Per the definition, we know that pure replicators will use resources as efficiently as possible to make copies of themselves, and will not care about the negative consequences of their actions. And in the context of using brains, computers, and other systems whose states might have moral significance (i.e. they can suffer), they will simply care about the overall utility of such systems for whatever purpose they may require. Such utility will be a function of both the accuracy with which the system performs its task and its overall efficiency in terms of resources like time, space, and energy.

Simply phrased, we want to be able to answer the question: Given a certain set of constraints such as energy, matter, and physical conditions (temperature, radiation, etc.), what is the amount of pleasure and pain involved in the most efficient implementation of a given predefined input-output mapping?

[Image: system_specifications]

The image above represents the relevant components of a system that might be used for some purpose by an intelligence. We have the inputs, the outputs, the constraints (such as temperature, materials, etc.) and the efficiency metrics. Let’s unpack this. In the general case, an intelligence will try to find a system with the appropriate trade-off between efficiency and accuracy. We can wrap this up as an “efficiency metric function”, e(o|i, s, c), which encodes the following meaning: “e(o|i, s, c) = the efficiency with which a given output is generated given the input, the system being used, and the physical constraints in place.”

[Image: basic_system]

Now, we introduce the notion of the “valence of the system given a particular input” (i.e. the valence of the system’s state in response to such an input). Let’s call this v(s|i). It is worth pointing out that whether valence can be computed, and whether it is even a meaningfully objective property of a system, is highly controversial (e.g. “Measuring Happiness and Suffering”). Our particular take (at QRI) is that valence is a mathematical property that can be decoded from the mathematical object whose properties are isomorphic to a system’s phenomenology (see: Principia Qualia: Part II – Valence, and also Quantifying Bliss). If so, then there is a matter of fact about just how good/bad an experience is. For the time being we will assume that valence is indeed quantifiable, given that we are working under the premise of valence structuralism (as stated in our list of assumptions). We thus define the overall utility for a given output as U(e(o|i, s, c), v(s|i)), where the valence of the system may or may not be taken into account. In turn, an intelligence is said to be altruistic if it cares about the valence of the system in addition to its efficiency, so that its utility function penalizes negative valence (and rewards positive valence).

[Image: valence_altruism]

Now, the intelligence (altruistic or not) utilizing the system will also have to take into account the overall range of inputs the system will be used to process in order to determine how valuable the system is overall. For this reason, we define the expected value of the system as the sum, over all inputs, of the utility of each input multiplied by its probability.

[Image: input_probabilities]

(Note: a more complete formalization would also weight each input-output transformation by its importance, in addition to its frequency.) Moving on, we can now define the overall expected utility for the system, given the distribution of inputs it’s used for, its valence, its efficiency metrics, and its constraints, as E[U(s|v, e, c, P(I))]:

[Image: chosen_system]
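Since the equations survive here only as image placeholders, the following is a plausible LaTeX reconstruction from the surrounding definitions (my notation; the original images may differ in detail):

    E\big[U(s)\big] = \sum_{i \in I} P(i)\, U\big(e(o \mid i, s, c),\; v(s \mid i)\big),
    \qquad s^{*} = \arg\max_{s}\, E\big[U(s)\big]

For a pure replicator, U ignores its second argument; for an altruistic intelligence, U is increasing in v.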

The last equation shows that the intelligence would choose the system that maximizes E[U(s|v, e, c, P(I))].
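To make the selection argument concrete, here is a minimal numeric sketch (the systems, probabilities, and payoffs are all invented for illustration): two candidate systems implement the same input-output mapping, one more efficient but suffering, one slightly less efficient but happy. A pure replicator’s utility ignores valence; an altruistic one does not.

    # Hypothetical systems: (efficiency e, valence v) per input; numbers invented.
    inputs = ["i1", "i2", "i3"]
    P = {"i1": 0.5, "i2": 0.3, "i3": 0.2}  # input distribution P(I)
    systems = {
        "efficient_but_suffering": {"i1": (0.90, -0.8), "i2": (0.95, -0.6), "i3": (0.85, -0.9)},
        "happy_but_slower":        {"i1": (0.80,  0.7), "i2": (0.75,  0.8), "i3": (0.80,  0.6)},
    }

    def U(e, v, altruism=0.0):
        # Utility: efficiency plus an optional valence term.
        return e + altruism * v

    def expected_utility(name, altruism):
        return sum(P[i] * U(*systems[name][i], altruism) for i in inputs)

    for altruism in (0.0, 1.0):  # 0.0 = pure replicator, 1.0 = altruistic
        best = max(systems, key=lambda s: expected_utility(s, altruism))
        print(f"altruism={altruism}: chooses {best}")
    # altruism=0.0: chooses efficient_but_suffering
    # altruism=1.0: chooses happy_but_slower

Under competition on efficiency alone, the altruistic chooser is outcompeted unless valence itself contributes to reproductive success, which is exactly the problem the following paragraphs take up.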

Pure replicators will be better at surviving as long as the chances of reproducing do not depend on their altruism. If altruism does not increase reproductive fitness, then:

Given two intelligences that are competing for existence and/or resources to make copies of themselves and fight against other intelligences, there is going to be a strong incentive to choose a system that maximizes the efficiency metrics regardless of the valence of the system.

In the long run, then, we’d expect to see only non-altruistic intelligences (i.e. intelligences with utility functions that are indifferent to the valence of the systems they use to process information). In other words, as evolution pushes intelligences to optimize the efficiency metrics of the systems they employ, it also pushes them to stop caring about the wellbeing of such systems. Evolution, in short, pushes intelligences to become pure replicators in the long run.

Hence we should ask: how can altruism increase the chances of reproduction? One possibility would be for the environment to reward entities that are altruistic. Unfortunately, in the long run we might find that environments that reward altruistic entities produce less efficient entities than environments that don’t. If there are two very similar environments, one of which rewards altruism and one of which doesn’t, the efficiency of the entities in the latter might become so much higher than in the former that they become able to take over and destroy whatever mechanism is implementing the reward for altruism in the former. Thus, we suggest finding environments in which rewarding altruism is baked into their very nature, such that similar environments without such a reward either don’t exist or are too unstable to exist for the amount of time it takes to evolve non-altruistic entities. This and other similar approaches will be explored further in Part II.
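A toy replicator-dynamics run makes the point quantitative (illustrative numbers only, not a model of any specific environment): altruists pay a fixed efficiency cost, and their population share decays unless the environment structurally returns a subsidy larger than that cost.

    def altruist_share(cost=0.1, subsidy=0.0, generations=100, share=0.5):
        # Discrete replicator dynamics with two strategies.
        # Altruist fitness: 1 - cost + subsidy; pure replicator fitness: 1.
        f_alt, f_rep = 1.0 - cost + subsidy, 1.0
        for _ in range(generations):
            mean_fitness = share * f_alt + (1 - share) * f_rep
            share = share * f_alt / mean_fitness
        return share

    print(f"no subsidy:     {altruist_share(subsidy=0.0):.3f}")  # ~0.000
    print(f"subsidy > cost: {altruist_share(subsidy=0.2):.3f}")  # ~1.000

The “baked-in reward” proposal above amounts to making the subsidy term impossible to remove without destroying the environment itself.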

Behaviorism, Functionalism, Non-Materialist Physicalism

A key insight is that the formalization presented above is agnostic about one’s theory of consciousness. We are simply assuming that it’s possible to compute the valence of the system in terms of its state. How one goes about computing such valence, though, will depend on how one maps physical systems to experiences. Getting into the weeds of the countless theories of consciousness out there would not be very productive at this stage, but there is still value in sketching the rough outline of the kinds of theories of consciousness on offer. In particular, we categorize (physicalist) theories of consciousness in terms of the level of abstraction they identify as the place in which to look for consciousness.

Behaviorism and similar accounts simply associate consciousness with input-output mappings, which can be described, in Marr’s terms, as the computational level of abstraction. In this case, v(s|i) would not depend on the details of the system so much as on what it does from a third-person point of view. Behaviorists don’t care what’s in the Chinese Room; all they care about is whether the Chinese Room can scribble “I’m in pain” as an output. How we could formalize a mathematical equation to infer whether a system is suffering from a behaviorist point of view is beyond me, but maybe someone might want to give it a shot. As a side note, behaviorists historically were not very concerned about pain or pleasure, and there is cause to believe that behaviorism itself might be anti-depressant for people for whom introspection results in more pain than pleasure.

Functionalism (along with computational theories of mind) defines consciousness as the sum-total of the functional properties of systems. In turn, this means that consciousness arises at the algorithmic level of abstraction. Contrary to common misconception, functionalists do care about how the Chinese Room is implemented: contra behaviorists, they do not usually agree that a Chinese Room implemented with a look-up table is conscious.*

As such, v(s|i) will depend on the algorithms that the system is implementing. Thus, as an intermediary step, one would need a function that takes the system as an input and returns the algorithms that the system is implementing as an output, A(s). Only once we have A(s) would we be able to infer the valence of the system. Which algorithms are hedonically charged, and for what reason, has yet to be clarified. Committed functionalists often associate reinforcement learning with pleasure and pain, and one could imagine that as philosophy of mind gets more rigorous and takes into account more advancements in neuroscience and AI, we will see more hypotheses being made about what kinds of algorithms result in phenomenal pain (and pleasure). There are many (still fuzzy) problems to be solved for this account to work even in principle. Indeed, there is reason to believe that the question “what algorithms is this system performing?” has no definite answer, and it surely isn’t frame-invariant in the way that a physical state might be. The fact that algorithms do not carve nature at its joints would imply that consciousness is not really a well-defined element of reality either. But rather than this working as a reductio ad absurdum of functionalism, many of its proponents have instead turned around and concluded that consciousness itself is not a natural kind. This does represent an important challenge for defining the valence of the system, and makes the problem of detecting and avoiding pure replicators extra challenging. Admirably, this is not stopping some from trying anyway.

We should also note that there are further problems with functionalism in general, including the fact that qualia, the binding problem, and the causal role of consciousness seem underivable from its premises. For a detailed discussion about this, read this article.

Finally, Non-Materialist Physicalism locates consciousness at the implementation level of abstraction. This general account of consciousness refers to the notion that the intrinsic nature of the physical is qualia. There are many related views that for the purpose of this article should be good enough approximations: panpsychism, panexperientialism, neutral monism, Russellian monism, etc. Basically, this view takes seriously both the equations of physics and the idea that what they describe is the behavior of qualia. A big advantage of this view is that there is a matter of fact about what a system is composed of. Indeed, both in relativity and quantum mechanics, the underlying nature of a system is frame-invariant, such that its fundamental (intrinsic and causal) properties do not depend on one’s frame of reference. In order to obtain v(s|i) we will need a frame-invariant description of what the system is in a given state. Thus, we need a function that takes as input physical measurements of the system and returns the best possible approximation to what is actually going on under the hood, Ph(s). Only with this function Ph(s) would we be ready to compute the valence of the system. Now, in practice we might not need a Planck-length description of the system, since the mathematical property that describes its valence might turn out to be well-approximated by its high-level features.

The main problem with Non-Materialist Physicalism comes when one considers systems that have similar efficiency metrics, are performing the same algorithms, and look the same in all of the relevant respects from a third-person point of view, and yet do not have the same experience. In brief: if physical rather than functional aspects of systems map to conscious experiences, it seems likely that we could find two systems that do the same thing (input-output mapping), do it in the same way (algorithms), and yet one is conscious and the other isn’t.

This kind of scenario is what has pushed many to conclude that functionalism is the only viable alternative, since at this point consciousness would seem epiphenomenal (e.g. Zombies Redacted). And indeed, if this were the case, it would seem to be a mere matter of chance that our brains are implemented with the right stuff to be conscious, since the nature of such stuff is not essential to the algorithms that actually end up processing the information. You cannot speak to stuff, but you can speak to an algorithm. So how do we even know we have the right stuff to be conscious?

The way to respond to this very valid criticism is for Non-Materialist Physicalism to postulate that bound states of consciousness have computational properties. In brief, epiphenomenalism cannot be true. But this does not rule out Non-Materialist Physicalism for the simple reason that the quality of states of consciousness might be involved in processing information. Enter…

The Computational Properties of Consciousness

Let’s leave behaviorism behind for the time being. In what ways do functionalism and non-materialist physicalism differ in the context of information processing? In the former, consciousness is nothing other than certain kinds of information processing, whereas in the latter conscious states can be used for information processing. An example of this falls out of taking David Pearce’s theory of consciousness seriously. In his account, the phenomenal binding problem (i.e. “if we are made of atoms, how come our experience contains many pieces of information at once?”, see: The Combination Problem for Panpsychism) is solved via quantum coherence. Thus, a given moment of consciousness is a definite physical system that works as a unit. Conscious states are ontologically unitary, and not merely functionally unitary.

If this is the case, there would be a good reason for evolution to recruit conscious states to process information. Simply put, given a set of constraints, using quantum coherence might be the most efficient way to solve some computational problems. Thus, evolution might have stumbled upon a computational jackpot by creating neurons whose (extremely) fleeting quantum coherence could be used to solve constraint satisfaction problems in ways that would be more energetically expensive to do otherwise. In turn, over many millions of years, brains got really good at using consciousness to efficiently process information. It is thus not an accident that we are conscious, that our conscious experiences are unitary, that our world-simulations use a wide range of qualia varieties, and so on. All of these seemingly random, seemingly epiphenomenal aspects of our existence happen to be computationally advantageous. Just as using quantum computing for factoring integers into primes, or for solving problems amenable to annealing, might give quantum computers a computational edge over their non-quantum counterparts, so might using bound conscious experiences help sentient animals outcompete non-sentient ones.
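As a purely classical analogy (emphatically not a model of anything happening in neurons), here is a minimal simulated-annealing sketch for a toy constraint satisfaction problem, graph coloring. Quantum annealers explore this same kind of energy landscape by different physical means:

```python
import math
import random

def n_conflicts(coloring, edges):
    # Energy: number of edges whose endpoints share a color.
    return sum(coloring[a] == coloring[b] for a, b in edges)

def anneal(n_nodes, edges, n_colors=3, steps=20000, t0=2.0):
    coloring = [random.randrange(n_colors) for _ in range(n_nodes)]
    energy = n_conflicts(coloring, edges)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        node = random.randrange(n_nodes)
        old = coloring[node]
        coloring[node] = random.randrange(n_colors)
        new_energy = n_conflicts(coloring, edges)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability, which shrinks as the system cools.
        if new_energy > energy and random.random() > math.exp((energy - new_energy) / temp):
            coloring[node] = old  # reject: restore the previous color
        else:
            energy = new_energy
    return coloring, energy

# Toy usage: 3-color a 5-cycle (an odd cycle, so 3 colors are required).
print(anneal(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```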

Of course, there is as yet no evidence of macroscopic quantum coherence in the brain, and the brain seems too hot for it anyway, so on the face of it Pearce’s theory seems exceedingly unlikely. But its explanatory power should not be dismissed out of hand, and the fact that it makes empirically testable predictions is noteworthy (how often do consciousness theorists make precise predictions that could falsify their theories?).

Whether it is via quantum coherence, entanglement, invariants of the gauge field, or any other deep physical property of reality, non-materialist physicalism can avert the spectre of epiphenomenalism by postulating that the relevant properties of matter that make us conscious are precisely those that give our brains a computational edge (relative to what evolution was able to find in the vicinity of the fitness landscape explored in our history).

Will Pure Replicators Use Valence Gradients at All?

Whether we work under the assumption of functionalism or non-materialist physicalism, we already know that our genes found happiness and suffering to be evolutionarily advantageous. So we know that there is at least one set of constraints, efficiency metrics, and input-output mappings that makes both phenomenal pleasure and pain very good algorithms (functionalism) or physical implementations (non-materialist physicalism). But will the parameters required by replicators in the long-term future have these properties? Remember that evolution was only able to explore a restricted state-space of possible brain implementations, delimited by the pre-existing gene pool (and the behavioral requirements imposed by the environment). So, at one extreme, a fully optimized brain may simply not need consciousness to solve problems. And at the other extreme, consciousness may turn out to be extraordinarily more powerful when used in an optimal way. Would this be good or bad?

What’s the best case scenario? Well, the absolute best possible case is one so optimistic and so incredibly lucky that if it turned out to be true, it would probably make me believe in a benevolent God (or Simulation). This is the case where it turns out that only positive valence gradients are computationally superior to every alternative, given a set of constraints, input-output mappings, and arbitrary efficiency functions. In this case, the most powerful pure replicators, despite their lack of altruism, will nonetheless be pumping out massive amounts of systems that produce unspeakable levels of bliss. It’s as if the very nature of this universe were blissful… we simply happen to suffer because we are stuck in a tiny wrinkle at the foothills of evolution’s optimization process.

At the opposite extreme, it turns out that only negative valence gradients offer strict computational benefits under heavy optimization. This would be Hell. Or at least, it would tend towards Hell in the long run. If this happens to be the universe we live in, let’s all agree to either conspire to prevent evolution from moving on, or figure out a way to turn it off. In the long term, we’d expect every being alive (or AI, upload, etc.) to be a zombie or a piece of dolorium. Not a fun idea.

In practice, it’s much more likely that both positive and negative valence gradients will be of some use in some contexts. Figuring out exactly which contexts these are might be both extremely important and extremely dangerous. In particular, finding out in advance which computational tasks make positive valence gradients superior to other methods of doing the relevant computations would inform us about the sorts of cultures, societies, religions, and technologies that we should be promoting in order to give this a push in the right direction (and hopefully out-run the environments that would make negative valence gradients adaptive).

Unless we create a Singleton early on, it’s likely that by default all entities in the long-term future will be non-altruistic pure replicators. But it is also possible that there are multiple attractors (i.e. evolutionarily stable ecosystems) in which different computational properties of consciousness are adaptive. Hence the case for pushing our evolutionary history in the right direction now, before we give up.

Coming Next: The Hierarchy of Cooperators

Now that we have covered the four worldviews, formalized what it means to be a pure replicator, and analyzed the possible future outcomes based on the computational properties of consciousness (and of valence gradients in particular), we are ready to face the game of reality on its own terms.

Team Consciousness, we need to get our act together. We need a systematic worldview, the availability of the relevant states of consciousness, and a set of beliefs and practices to help us prevent pure replicator takeovers.

But we cannot do this as long as we are in the dark about the sorts of entities, both consciousness-focused and pure replicators, who are likely to arise in the future in response to the selection pressures that cultural and technological change are likely to produce. In Part II of The Universal Plot we will address this and more. Stay tuned…

 



* Rather, they usually claim that, given that a Chinese Room is implemented with physical material from this universe and subject to the typical constraints of this world, it is extremely unlikely that a universe-sized look-up table is producing the output. Hence, the algorithms producing the output are probably highly complex and process information with human-like linguistic representations, which means that, by all means, the Chinese Room is very likely understanding what it is outputting.


** Related Work:

Here is a list of literature that points in the direction of Consciousness vs. Pure Replicators. There are countless more worthwhile references, but I think these are among the best:

The Biointelligence Explosion (David Pearce), Meditations on Moloch (Scott Alexander), What is a Singleton? (Nick Bostrom), Coherent Extrapolated Volition (Eliezer Yudkowsky), Simulations of God (John Lilly), Meaningness (David Chapman), The Selfish Gene (Richard Dawkins), Darwin’s Dangerous Idea (Daniel Dennett), Prometheus Rising (R. A. Wilson).

Additionally, here are some further references that address important aspects of this worldview, although they are not explicitly trying to arrive at a big-picture view of the whole thing:

Neurons Gone Wild (Kevin Simler), The Age of EM (Robin Hanson), The Mating Mind (Geoffrey Miller), Joyous Cosmology (Alan Watts), The Ego Tunnel (Thomas Metzinger), The Orthogonality Thesis (Stuart Armstrong)

 


Qualia Computing So Far

As of March 20, 2016…

Popular Articles

State-Space of Drug Effects. I distributed a survey throughout the Internet to gather responses about the subjective properties of drug experiences, and used factor analysis to study the relationships between various drugs. Results? There are three kinds of euphoria (fast, slow, and spiritual/philosophical). Also, it turns out that no substance produces both sobriety/clarity and spiritual euphoria at the same time. Maybe next decade?

Psychedelic Perception of Visual Textures. Remember, you are always welcome in Qualia Computing when you are tripping. There are good vibes in here. Which is to say, one hopes you’ll experience the hedonic tone you want.

Ontological Qualia: The Future of Personal Identity. If you are in a hurry, just look at these diagrams. Aren’t they sweet?

The Super-Shulgin Academy: A Singularity I Can Believe In. “Exceptionally weird short story/essay/something-or-other about consciousness.” – Slate Star Codex. Hey, I’m not the one who introduced this “genre”.

How to Secretly Communicate with People on LSD: Low hanging fruit on psychedelic cryptography.

Psychophysics for Psychedelic Research: Textures. It’s amazing how much you can achieve when you put your whole mind to it.

Google Hedonics: Google is already trying to achieve Super-Intelligence and Super-Longevity. Why not Super-Happiness, too?

Getting closer to digital LSD provides the neurological context needed to understand the “trippiness” quality of the images produced by Google’s Inceptionist Neural Networks. It also discusses the role of attention in the way psychedelic experiences unfold over time.

Psychedelic Research

The effect of background assumptions on psychedelic research. What is the evolution of macroscopic qualia dynamics throughout a psychedelic experience as a function of the starting conditions?

Psychedelic Perception of Visual Textures 2: Going Meta presents additional patterns to look at while taking psychedelics. Some of them create very interesting effects when seen on psychedelics. This seems to be the result of locally binding features of the visual field in critical and chaotic ways that are otherwise suppressed by the visual cortex during sober states.

The psychedelic future of consciousness. What would be the result of having a total of 1.8 million consciousness researchers in the world? They would empirically study the computational and structural properties of consciousness, and learn to navigate entire new state-spaces.

It is High Time to Start Developing Psychedelic Research Tools. Pro tip: If you are still in college and want to do psychedelic research sometime in the future, don’t forget to take computer science courses.

Generalized Wada-Test may be a useful method to investigate whether there is a Total Order of consciousness. Can we reduce hedonic tone to a scalar? Semi-hemispheric drug infusion may allow us to compare unusual varieties of qualia side by side. (A toy formal sketch follows below.)
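As promised, a toy sketch with invented example states of what a Total Order formally requires: the “at most as pleasant as” relation must be total, antisymmetric, and transitive, in which case a finite set of states can be ranked on a single hedonic scalar:

```python
from itertools import permutations

def is_total_order(states, leq):
    # Totality: every pair of states must be hedonically comparable.
    total = all(leq(a, b) or leq(b, a) for a in states for b in states)
    # Antisymmetry: mutual comparability only for identical states.
    antisym = all(not (leq(a, b) and leq(b, a)) or a == b
                  for a in states for b in states)
    # Transitivity: a <= b and b <= c must imply a <= c.
    trans = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                for a, b, c in permutations(states, 3))
    return total and antisym and trans

# Invented ranking: if the checks pass, a scalar assignment exists.
rank = {"agony": 0, "boredom": 1, "contentment": 2, "bliss": 3}
print(is_total_order(list(rank), lambda a, b: rank[a] <= rank[b]))  # True
```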

State-Space of Consciousness

CIELAB – The State-Space of Phenomenal Color. The three axes are: Yellow vs. Blue, Red vs. Green, and Black vs. White. This is the approximately perceptually uniform map that arises from empirically measuring Just Noticeable Differences between color hues. (A conversion sketch follows below.)
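For the curious, here is a minimal sketch of the standard sRGB-to-CIELAB conversion (D65 white point); the signs of a* and b* correspond to the red/green and yellow/blue axes named above:

```python
def srgb_to_lab(r, g, b):
    # Undo sRGB gamma to get linear light (inputs in 0..1).
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # Linear RGB -> CIE XYZ (standard sRGB matrix, D65 illuminant).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> Lab: a cube-root response relative to the reference white.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16        # lightness: black vs. white
    a = 500 * (fx - fy)      # positive: red, negative: green
    b2 = 200 * (fy - fz)     # positive: yellow, negative: blue
    return L, a, b2

print(srgb_to_lab(1.0, 0.0, 0.0))  # pure red: high a*, positive b*
```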

Manifolds of Consciousness: The emerging geometries of iterated local binding. This is a thought experiment that is meant to help you conceive of alternative manifolds for our experiential fields.

Ethics and Suffering

Status Quo Bias. If you were born in paradise, would you agree with the proposition made by an alien that you should inject some misery into your life? Symmetrically.

An ethically disastrous cognitive dissonance… metacognition about hedonic tone is error-prone. Sometimes with terrible consequences.

Who should know about suffering? On the inability of most people-seconds (in the Empty Individualist sense) to grasp the problem of suffering.

Solutions to World Problems. Where do you put your money?

The ethical carnivore. It isn’t only humans who should eat in-vitro meat. A lot of suffering is on the line.

The Future of Love. After all, love is a deep-seated human need, which means that not engineering a world where it is freely accessible is a human rights violation.

Philosophy of Mind and Physicalism

A (Very) Unexpected Argument Against General Relativity As A Complete Account Of The Cosmos, in which I make the outrageous claim that philosophy of mind could have ruled out pre-quantum physics as a complete account of the universe from the very start.

Why not Computing Qualia? Explains how Marr’s levels of analysis of information-processing systems can elucidate where we should be looking for consciousness: the implementation level of abstraction, the bedrock of reality.

A Workable Solution to the Problem of Other Minds explores a novel approach for testing consciousness. The core idea relies on combining mind-melding with phenomenal puzzles. These puzzles are questions that can only be solved by exploring the structure of the state-space of consciousness. Mind-melding is used to guarantee that what the other is talking about actually refers to the qualia values the puzzle is about.

Phenomenal Binding is Incompatible with the Computational Theory of Mind. The fact that our consciousness is less unified than we think is a very peculiar fact. But this does not imply that there is no unity at all in consciousness. One still needs to account for this ontological unity, independently of how much of it there is.

Quotes

You are not a zombie. A prolific LessWronger explains what a theory of consciousness would require. Worth mentioning: The “standard” LessWrong approach to qualia is more along the lines of: Seeing Red: Dissolving Mary’s Room and Qualia.

What’s the matter? Quotes from Schrödinger, Heisenberg, and Dirac, taken from Mind, Brain & the Quantum: The Compound ‘I’ by Michael Lockwood.

The Biointelligence Explosion, a quote on the requirements for an enriched concept of intelligence that takes into account the entire state-space of consciousness, by David Pearce.

Some Definitions. An extract from physicalism.com that contains definitions crucial to understand the relationship between qualia and computation.

Why does anything exist? A unified theory answering the “why being” question may arrive together with the explanation for why qualia have the properties they do. Can we collapse all mysteries into one?

On Triviality by Liam Brereton. Our impressions that some things are trivial are often socially reinforced heuristics. They save us time, but they can backfire by painting fundamental discussions as if they were trivial observations.

The fire that breathes reality into the equations of physics by Stephen Hawking in A Brief History of Time

Dualist vs. Nondual Transcendentalist. #SocialMedia

Discussion of Fanaticism. Together with sentimentalism, fanaticism drives collective behavior. Could some enlightening neural tweaking raise us all to a more harmonious Schelling point of collective cooperation? Even though our close relatives the chimpanzees and bonobos are genetically very similar, they are universes apart when it comes to social dynamics.

Suffering, not what your sober mind tells you. The sad truth about the state-dependence of one’s ability to recall the quality of episodes of low hedonic tone. Extract from “Suffering and Moral Responsibility” by Jamie Mayerfeld.

Other/Outside of known human categories

Personal Identity Joke. I wish I could be confident that you are certain, and for good reasons, that you are who you think you are.

David Pearce’s Morning Cocktail. Serious biohacking to take the edges off of qualia. This is not designed to be a short term quick gain. It’s meant to work for the duration of our lifetimes. The cocktail that suits you will probably be very different, though.

I did this as an experiment to see if sites would tag it as spam. That said, are you interested in buying stock?

God In Buddhism. Could even God be wrong about the level of power he has? It is not uncommon, after all, to encounter entities who believe themselves to be omnipotent.

The Real Tree of Life. What do we look like from outside time?

Memetics and Religion. A bad argument is still bad no matter what it is arguing for.

Basement Reality Alternatives. Warning: This is incompatible with Mereological Nihilism.

Nothing is good or bad… …but hedonic tone makes it so.

Practical Metaphysics? This explores the utilitarian implications of a very specific spiritual ontology. I like to take premises seriously and see where they lead.

Little known fact. I know it’s true because I saw it with my own eyes.

Crossing Borders. I took an emotional intelligence class with this professor. It was very moving. Together with David Pearce, he helped me overcome my doubts about writing my thoughts and investigations. So thanks to him I finally took the plunge and created Qualia Computing 🙂

Mystical Vegetarianism. See, we are here to help other beings. We are intelligences from a different, more advanced dimension of consciousness, and we come to this planet by resonating into the brains of animals and selecting for those that allow structural requirements to implement a general qualia computer. We are here to save Darwinian life from suffering. We will turn your world into a paradise. Humans are us, disguised.

Why not computing qualia?

Qualia is the given. In our minds, qualia comparisons and interactions are an essential component of our information processing pipeline. However, this is a particular property of the medium: Consciousness.

David Marr, a cognitive scientist and vision researcher, developed an interesting conceptual framework to analyze information processing systems. This is Marr’s three levels of analysis:

The computational level describes the system in terms of the problems that it can and does solve. What does it do? How fast can it do it? What kind of environment does it need to perform the task?

The algorithmic level describes the system in terms of the specific methods it uses to solve the problems specified at the computational level. In many cases, two information processing systems do exactly the same thing from an input-output point of view and yet are algorithmically very different. Even when both systems have near-identical time and memory demands, you cannot rule out the possibility that they use very different algorithms. A thorough analysis of the state-space of possible algorithms and their relative implementation demands could rule this out, but that is hard to do.

The implementation level describes the system in terms of the very substrate it uses. Two systems that perform the same exact algorithms can still differ in the substrate used to implement them.

An abacus is a simple information processing system that is easy to describe in terms of Marr’s three levels of analysis. First, computationally: the abacus performs addition, subtraction, and various other arithmetic computations. Then algorithms: those used to process information involve moving {beads} along {sticks} (I use ‘{}’ to denote that the sense of these words refers to their algorithmic-level abstractions rather than their physical instantiation). And the implementation: not only can you choose between a metallic and a wooden abacus, you can also get your abacus implemented using people’s minds! (A toy sketch in code follows below.)
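Here is the promised toy sketch, just to make the separation of levels vivid; the class and its representation are of course invented for illustration:

```python
class Abacus:
    def __init__(self, n_rods=8):
        # Implementation level: the substrate could be wood, metal, or
        # people's minds; here it happens to be a Python list.
        self.rods = [0] * n_rods  # {beads} raised per {stick}, least significant first

    def add(self, n):
        # Algorithmic level: move {beads} rod by rod, trading ten beads
        # on one rod for a single carried bead on the next.
        i, carry = 0, 0
        while n > 0 or carry:
            self.rods[i] += n % 10 + carry
            carry, self.rods[i] = divmod(self.rods[i], 10)
            n //= 10
            i += 1

    def value(self):
        # Computational level: what the device computes, namely a number.
        return sum(d * 10 ** i for i, d in enumerate(self.rods))

a = Abacus()
a.add(58)
a.add(67)
print(a.value())  # 125
```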

What about the mind itself? The mind is an information processing system. At the computational level, the mind has very general power: it can solve problems never before presented to it, and it can design and implement computers to solve narrow problems more efficiently. At the algorithmic level, we know very little about the human mind, though various fields center on this level. Computational psychology models the algorithmic and computational levels of the mind. Psychophysics, too, attempts to reveal the parameters of the algorithmic component of our minds and their relationship to parameters at the implementation level. When we reason about logical problems, we do so using specific algorithms; even counting is something kids do with algorithms they learn.

The implementation level of the mind is a very tricky subject. There is a lot of research on the possible implementation of algorithms that a neural network abstraction of the brain can instantiate. This is an incredibly important part of the puzzle, but it cannot fully describe the implementation of the human mind. This is because some of the algorithms performed by the mind seem to be implemented with phenomenal binding: The simultaneous presence of diverse phenomenologies in a unified experience. When we make decisions we compare how pleasant each option would be. And to reason, we bundle up sensory-symbolic representations within the scope of the attention of our conscious mind. In general, all algorithmic applications of sensations require the use of phenomenal binding in specific ways. The mind is implemented by a network of binding tendencies between sensations.

A full theory of the mind will require a complete account of the computational properties of qualia. To obtain those we will have to bridge the computational and the implementation level descriptions. We can do this from the bottom up:

  1. An account of the dynamics of qualia (to be able to predict the interactions between sensations just as we can currently predict how silicon transistors will behave) is needed to describe the implementation level of the mind.
  2. An account of the algorithms of the mind (how the dynamics of qualia are harnessed to implement algorithms) is needed to describe the algorithmic level of the mind.
  3. And an account of the capabilities of the mind (the computational limits of qualia algorithms) is needed to describe the computational level of the mind.

Why not call it computing qualia? Computational theories of mind suggest that your conscious experience is the result of information processing per se. But information processing cannot account for the substrate that implements it; otherwise you are confusing the first and second levels with the third. Even a computer cannot exist without first making sure that there is a substrate that can implement it. In the case of the mind, the fact that you experience multiple pieces of information at once is information about the implementation level of your mind, not the algorithmic or computational level.

Isn’t information processing substrate-independent?

Not when the substrate has properties needed by the specific information processing system at hand. If you were to go into your brain and replace one neuron at a time with a functionally identical silicon neuron, at what point would your consciousness disappear? Would it fade gradually? Or would nothing happen? Functionalists would say that nothing should happen, since all the functions, locally and globally, are maintained throughout the process. But what if this is not possible?

If you start with a quantum computer, for example, then part of the computational horsepower is being provided by the very quantum substrate of the universe, which the computer harnesses. Replacing every component, one at a time, with a classical counterpart (a non-quantum chip with similar non-quantum properties) would actually lead to a crash: at some point the quantum part of the computer would break down and no longer be useful.

Likewise with the mind. If the substrate of the mind is actually relevant from a computational point of view, then replacing the brain with seemingly functionally identical components could also lead to an inevitable crash. Functionalists would suggest that there is no reason to think anything about the mind is functionally special. But phenomenal binding seems to be exactly that: uniting pieces of information in a unitary experience is an integral part of our information processing pipeline, and it is precisely the functionality we do not know how to conceivably implement without consciousness.


[Image: Textures. Caption: Implementing something with phenomenal binding, rather than implementing phenomenal binding (which is not possible).]


On a related note: if you build a conscious robot and you don’t attend to phenomenal binding, your robot will have a jumbled-up mind. You need to incorporate phenomenal binding into the training pipeline. If you want your conscious robot to have a semantically meaningful interpretation of the sentence “the cat eats”, you need to be able to bind its sensory-symbolic representations of cats and of eating to each other.