Is the Orthogonality Thesis Defensible if We Assume Both Valence Realism and Open Individualism?

Ari Astra asks: Is the Orthogonality Thesis Defensible if We Assume Both “Valence Realism” and Open Individualism?


Ari’s own response: I suppose it’s contingent on whether or not digital zombies are capable of general intelligence, which is an open question. However, phenomenally bound subjective world simulations seem like an uncharacteristic extravagance on the part of evolution if non-sphexish p-zombie general intelligence is possible.

Of course, it may be possible, but just not reachable through Darwinian selection. But the fact that a search process as huge as evolution couldn’t find it and instead developed profoundly sophisticated phenomenally bound subjectivity is (possibly strong) evidence against the proposition that zombie AGI is possible (or likely to be stumbled on by accident).

If we do need phenomenally bound subjectivity for non-sphexish intelligence, and minds ultimately care about qualia valence – confusedly thinking that they care about other things only when they’re below a certain intelligence (or thoughtfulness) level – then it seems to follow that smarter-than-human AGIs will converge on valence optimization.

If OI is also true, then smarter-than-human AGIs will likely converge on it as well – since it’s within the reach of smart humans – and this will plausibly lead to AGIs adopting sentience in general as their target for valence optimization.

Friendliness may be built into the nature of all sufficiently smart and thoughtful general intelligence.

If we’re not drug-naive, and we’ve conducted the phenomenological experiment of chemically blowing open the reducing valves that keep “mind at large” out and that filter and shape hominid consciousness, then we know by direct acquaintance that it’s possible to hack our way to more expansive awareness.

We shouldn’t discount the possibility that AGI will do the same simply because the idea is weirdly genre-bending. Whatever narrow experience of “self” an AGI starts with, it may quickly expand beyond it.


Michael E. Johnson’s response: The orthogonality thesis seems sound from ‘far mode’ but always breaks down in ‘near mode’. One way it breaks down is in implementation: the way you build an AGI system will definitely influence what it tends to ‘want’. Orthogonality is a leaky abstraction in this case.

Another way it breaks down is that the nature and structure of the universe instantiates various Schelling points. As you note, if Valence Realism is true, then there exists a pretty big Schelling point around optimizing valence. Any arbitrary AGI would be much more likely to optimize for (and coordinate around) positive qualia than, say, paperclips. I think this may be what your question gets at.

Coordination is also a huge question. You may have read this already, but worth pointing to: A new theory of Open Individualism.

To collect some threads: I’d suggest that much of the future will be determined by the coordination capacity and game-theoretic equilibria between (1) different theories of identity, and (2) different metaphysics.

What does ‘metaphysics’ mean here? I use ‘metaphysics’ as shorthand for the ontology people believe is ‘real’ – what they believe we should look at when determining moral action.

The cleanest typology for metaphysics I can offer is: some theories focus on computations as the thing that’s ‘real’, the thing that ethically matters – we should pay attention to what the *bits* are doing. Others focus on physical states – we should pay attention to what the *atoms* are doing. I’m on team atoms, as I note here: Against Functionalism.

My suggested takeaway: an open individualist who assumes computationalism is true (team bits) will have a hard time coordinating with an open individualist who assumes physicalism is true (team atoms) – they’re essentially running incompatible versions of OI and will compete for resources. As a first approximation, instead of three theories of personal identity – Closed Individualism, Empty Individualism, Open Individualism – we’d have six: CI-bits, CI-atoms, EI-bits, EI-atoms, OI-bits, OI-atoms. Whether the future is positive will be substantially determined by how widely and deeply we can build positive-sum moral trades between these six frames.

Maybe there’s further structure if we add a yes/no dimension for Valence Realism. But maybe not – my intuition is that ‘team bits’ trends toward rejecting valence realism, whereas ‘team atoms’ trends toward accepting it. So we’d still have these core six.

(I believe OI-atoms or EI-atoms is the ‘most true’ theory of personal identity, and that upon reflection and under consistency constraints agents will converge to these theories at the limit, but I expect all six theories to be well-represented by various agents and pseudo-agents in our current and foreseeable technological society.)
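To make the frame-counting concrete, here is a minimal throwaway sketch in Python (purely illustrative bookkeeping; the labels are just shorthand, not a formal model from the discussion above):

    # Enumerate the 3 x 2 grid of identity theories and metaphysics.
    from itertools import product

    identity = ["CI", "EI", "OI"]      # Closed / Empty / Open Individualism
    metaphysics = ["bits", "atoms"]    # computationalism vs. physicalism

    frames = [f"{i}-{m}" for i, m in product(identity, metaphysics)]
    print(frames)  # ['CI-bits', 'CI-atoms', 'EI-bits', 'EI-atoms', 'OI-bits', 'OI-atoms']

    # If Valence Realism (yes/no) were a fully independent third axis, the
    # count would double to 12; the intuition above is that it correlates
    # with 'team atoms', so in practice we stay close to the core six.
    print(len(frames) * 2)  # 12 in the fully independent case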

Toy Story 4 – 20 Movie Review

by Frank Yang (post)

Toy Story was my favorite movie growing up. I had the entire collection. The fact that the toys could think without a brain made me explore dualism, monism, the existence of God, and nofap (see next slides). Toy Story 4 is weird as fuck for a Disney movie. Due to the psychedelic renaissance and mass awakening, people want everything to be increasingly trippy.

Forky is probably the weirdest Disney character of all time. It’s like the producers and writers got together in a room and brainstormed, “hmm, how can we strip a character down to its bare material minimum to embody pure Being and Nothingness, so that all he wants is to go back to the Source?” The toys are on their way to realization, with Woody and Buzz self-inquiring about the distinction between the voice in their heads and the voice from their voice boxes. By the end, Woody dissolves the part of his ego attached to having an owner, but he is still asleep because he still believes he is a toy.

Toy Story 15 will eventually be about enlightenment.

Buzz will be the first one to wake up since he always had a hunch that he wasn’t a toy and is obsessed with infinity.

Buzz screams at Woody, ‘You are not a toy, but infinite Consciousness.’

Maybe by Toy Story 18 both the toys and their owners can break through the layer of illusion that separates them and finally rejoice and communicate with each other after realizing they are made up of the same pixels, floating inside the same bubble of Divine imagination with limitless possibilities.

In Toy Story 20, every object inside the screen – toys and kids, trees, shoes, and houses – all combine forces and congeal their pixels into One, exit the screen, and merge with the audience in an Absolute orgy where all dualities collapse.

We’re left with an empty screen; the good old Witness. McDonald’s manufactures blank-screen keychains to go along with Happy Meals, and all the kids think they got woke.

But when my grandson brings one home, I’ll smash the little screen with a hammer. “The Observer is the last stand against freedom!” I’ll yell.
And then he was enlightened.

Three Interesting Math Problems to Work on While on LSD

  1. Let P be a simple polygon with n > 3 sides. (A simple polygon is a polygon that does not self-intersect, but it is not necessarily convex.) Prove that no matter the shape of P, there is always a diagonal (a segment that connects two vertices of P without intersecting any of its sides) that divides P into two polygons, both of which have at least n/3 sides. (A side-counting note follows this list.)
  2. Let A and B be two points in the plane. Using only a compass and a straightedge, construct the point C that is the exact midpoint between A and B. Now do the same thing, but using only a compass.
  3. There are 17 point-sized lighthouses in the plane. Assume that each lighthouse can shine a beam of light in any direction, covering an angle of 2*pi/17. Prove that no matter the position of each lighthouse, it is always possible to choose the angles at which they shine their light such that every point in the plane is illuminated (point-sized lighthouses don’t cast shadows). (An angle-sum note follows this list.)
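A couple of bookkeeping notes may help set up problems 1 and 3 (these are my own framing, not part of the original statements, and they are not solutions). For problem 1: if a diagonal joins two vertices that are k boundary edges apart (with 2 \le k \le n - 2), each piece keeps its run of boundary edges and gains the diagonal, so the two resulting polygons have

    (k + 1) \;\text{ and }\; (n - k + 1) \;\text{ sides}, \qquad (k + 1) + (n - k + 1) = n + 2,

and the claim becomes that some valid diagonal achieves \min(k + 1,\, n - k + 1) \ge n/3. For problem 3, notice that the angular budget is exactly tight: the total beam angle available is

    17 \cdot \frac{2\pi}{17} = 2\pi,

one full turn with no slack, which is part of what makes the problem delicate.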

In Selective Enhancement of Specific Capacities Through Psychedelic Training, Willis Harman and James Fadiman outline the results of a study on the potential use of psychedelics for problem solving. In the study, scientists, engineers, mathematicians, and designers took either 100 micrograms of LSD or 200 mg of mescaline and worked on a problem they were personally invested in and had not been able to solve for at least 3 months. According to Fadiman, 9 out of 10 participants came up with a solution to their problem that was validated by their professional colleagues.

The three problems above are not easy, but they are also not insanely difficult. If it means anything to you, their level of difficulty might be around that of Problem 1 or 4 of an IMO, with the advantage that you do not need any fancy math to solve them (high-school math is more than sufficient). I do not know whether solving these problems is easier or harder on psychedelics, but I figured I would share them as possible Schelling points for “challenging math problems to think about while on psychedelics” to see if anyone reports benefits from such a setup. I personally like these problems, and I can assure you that they do have interesting and clever solutions.

Assuming you are already planning on taking a psychedelic substance in the future: I would recommend trying to solve one of these problems for at least 1 hour while sober, and then setting aside at least 30 minutes (preferably 1 hour) while on the psychedelic to give it your full attention. Please let me know if you either solve a problem or get an interesting insight from the exercise. I am particularly curious to hear about *what aspects* of the psychedelic state seemed to be either beneficial or detrimental in solving these problems. Even if you do not solve a problem, you may be able to think about it in new ways and derive useful insights; if you do, let me know as well.