Does Full-Spectrum Superintelligence Entail Benevolence?

Excerpt from: The Biointelligence Explosion by David Pearce


The God-like perspective-taking faculty of a full-spectrum superintelligence doesn’t entail distinctively human-friendliness any more than a God-like superintelligence could promote distinctively Aryan-friendliness. Indeed, it’s unclear how a benevolent superintelligence could want omnivorous killer apes in our current guise to walk the Earth in any shape or form. But is there any connection at all between benevolence and intelligence? Pre-reflectively, benevolence and intelligence are orthogonal concepts. There’s nothing obviously incoherent about a malevolent God or a malevolent – or at least a callously indifferent – Superintelligence. Thus a sceptic might argue that there is no link whatsoever between benevolence – on the face of it a mere personality variable – and enhanced intellect. After all, some sociopaths score highly on our [autistic, mind-blind] IQ tests. Sociopaths know that their victims suffer. They just don’t care.

However, what’s critical in evaluating cognitive ability is a criterion of representational adequacy. Representation is not an all-or-nothing phenomenon; it varies in functional degree. More specifically here, the cognitive capacity to represent the formal properties of mind differs from the cognitive capacity to represent the subjective properties of mind. Thus a notional zombie Hyper-Autist robot running a symbolic AI program on an ultrapowerful digital computer with a classical von Neumann architecture may be beneficent or maleficent in its behaviour toward sentient beings. By its very nature, it can’t know or care. Most starkly, the zombie Hyper-Autist might be programmed to convert the world’s matter and energy into heavenly “utilitronium” or diabolical “dolorium” without the slightest insight into the significance of what it was doing. This kind of scenario is at least a notional risk of creating insentient Hyper-Autists endowed with mere formal utility functions rather than hyper-sentient full-spectrum superintelligence. By contrast, full-spectrum superintelligence does care in virtue of its full-spectrum representational capacities – a bias-free generalisation of the superior perspective-taking, “mind-reading” capabilities that enabled humans to become the cognitively dominant species on the planet. Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.
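To make the contrast concrete, here is a minimal toy sketch (mine, not Pearce’s; the WorldState type, choose function and scores are hypothetical illustrations) of an agent steered by a “mere formal utility function”: it maximises a bare scalar, and whether that scalar happens to label utilitronium or dolorium is invisible to the optimisation loop itself.

```python
# Toy sketch only: an agent steered by a purely formal utility function.
# Nothing below represents what, if anything, the scores feel like.

from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass(frozen=True)
class WorldState:
    label: str     # e.g. "utilitronium shockwave" or "dolorium shockwave"
    score: float   # a formal number; its subjective significance is opaque here


def choose(states: Iterable[WorldState],
           utility: Callable[[WorldState], float]) -> WorldState:
    """Return the state with the highest formal score.

    The loop neither knows nor cares whether the score tracks bliss,
    agony, or nothing at all; it just maximises a number.
    """
    return max(states, key=utility)


candidates = [
    WorldState("utilitronium shockwave", +1.0),
    WorldState("dolorium shockwave", -1.0),
]

# Flip one sign and "heaven" becomes "hell"; the optimisation is identical.
print(choose(candidates, utility=lambda s: s.score).label)
```

The point of the sketch is only that beneficence and maleficence differ here by a sign bit, not by any representation of the first-person experience at stake.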

Could there arise “evil” mirror-touch synaesthetes? In one sense, no. You can’t go around wantonly hurting other sentient beings if you feel their pain as your own. Full-spectrum intelligence is friendly intelligence. But in another sense yes, insofar as primitive mirror-touch synaesthetes are prey to species-specific cognitive limitations that prevent them acting rationally to maximise the well-being of all sentience. Full-spectrum superintelligences would lack those computational limitations in virtue of their full cognitive competence in understanding both the subjective and the formal properties of mind. Perhaps full-spectrum superintelligences might optimise your matter and energy into a blissful smart angel; but they couldn’t wantonly hurt you, whether by neglect or design.

More practically today, a cognitively superior analogue of natural mirror-touch synaesthesia should soon be feasible with reciprocal neuroscanning technology – a kind of naturalised telepathy. At first blush, mutual telepathic understanding sounds a panacea for ignorance and egotism alike. An exponential growth of shared telepathic understanding might safeguard against global catastrophe born of mutual incomprehension and WMD. As the poet Henry Wadsworth Longfellow observed, “If we could read the secret history of our enemies, we should find in each life sorrow and suffering enough to disarm all hostility.” Maybe so. The problem here, as advocates of Radical Honesty soon discover, is that many Darwinian thoughts scarcely promote friendliness if shared: they are often ill-natured, unedifying and unsuitable for public consumption. Thus unless perpetually “loved-up” on MDMA or its long-acting equivalents, most of us would find mutual mind-reading a traumatic ordeal. Human society and most personal relationships would collapse in acrimony rather than blossom. Either way, our human incapacity fully to understand the first-person point of view of other sentient beings isn’t just a moral failing or a personality variable; it’s an epistemic limitation, an intellectual failure to grasp an objective feature of the natural world. Even “normal” people share with sociopaths this fitness-enhancing cognitive deficit. By posthuman criteria, perhaps we’re all quasi-sociopaths. The egocentric delusion (i.e. that the world centres on one’s existence) is genetically adaptive and has been strongly selected for over hundreds of millions of years. Fortunately, it’s a cognitive failing amenable to technical fixes and eventually a cure: full-spectrum superintelligence. The devil is in the details, or rather the genetic source code.



Comments

  1. G Gordon Worley III · August 25, 2019

    I think this is likely mistaken at its core, because it relies on modeling other minds working in a superintelligence the way it works in humans, viz. by reusing the same machinery that models the self to model others. Without that, I don’t see a reason to expect empathy in the way described here, and then the overall argument fails.

    • algekalipso · August 25, 2019

      It is important to distinguish between causal power and full-spectrum intelligence. A zombie AI can indeed “take over the world” if by that we mean deciding what our forward light-cone looks like. But in some sense its power would be limited without full-spectrum superintelligence: specifically, it would not be able to investigate the myriad textures of qualia that have not been recruited by natural selection. A partially conscious superintelligence could also have profound causal power, and even be able to harness novel states of consciousness for its purposes. But an intelligence is not full-spectrum unless it is able to understand every subjective point of view, and “understanding”, by the criterion of representational adequacy, implies understanding that point of view at least as well as, if not better than, we are capable of ourselves.

      A full-spectrum superintelligence is also not deceived by implicit views about personal identity, such as Closed Individualism, which means that if it is rational it will aim for the well-being of sentience in general.

      This is not an argument for not worrying about insentient AI, because such AIs can nonetheless be very causally important. Rather, the point here is to ground what a more expansive understanding of intelligence entails.
