David Pearce on Longtermism

In answer to the Quora question “What does David Pearce think of Longtermism in the Effective Altruist movement?”


“Future generations matter, but they can’t vote, they can’t buy things, they can’t stand up for their interests.”
(80,000 Hours)

In its short history, the Effective Altruist (EA) movement has passed from a focus on maximally effective ways to tackle (1) existing sources of human and nonhuman animal suffering (“Giving What We Can”, etc) to (2) AI safety (the spectre of an imminent machine “Intelligence Explosion” that might turn us into the equivalent of paperclips) to (3) Longtermism: the key measure of the (dis)value of our actions today isn’t their effect on existing sentient beings, but rather how our actions affect the very long-run future. According to Longtermism, first-wave EA was myopic. Intelligent moral agents shouldn’t be unduly influenced by emotional salience either in space or in time. On various plausible assumptions, there will be vastly more sentient beings in the far future. Granted mastery of the pleasure-pain axis, their lives – or at least their potential lives – will be overwhelmingly if not exclusively positive. Failure to create such astronomical amounts of positive value would be an ethical catastrophe. So defeating existential risk trumps all else. Contemporary humanity is living at the “hinge of history”; human extinction or civilisational collapse would be the ultimate evil. Therefore, today’s effective altruists should aspire to act impartially to safeguard the potential interests of far future generations, even at the expense of our own.

To be fair, this potted history of effective altruism is simplistic. Some first-wave EAs are unconvinced by the Longtermist turn. Yet on a Longtermist analysis, what should today’s aspiring EAs specifically do? The EA policy ramifications of this proposed prioritization are murky. For an introduction to Longtermism, see Benjamin Todd’s “Future Generations and their Moral Significance” (80,000 Hours) and Dylan Balfour’s “Longtermism: How Much Should We Care About the Far Future?” For a defence of “strong” longtermism, see William MacAskill and Hilary Greaves, “The case for strong longtermism”.

For a more sceptical perspective, see e.g. Vaden Masrani’s “A Case Against Strong Longtermism” or Phil Torres’ polemical “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’”.

My view?
Longtermist – in a sense. Just as science aspires to the view from nowhere, “the point of view of the universe”, aspiring effective altruists should in theory aim to do likewise. An absence of arbitrary spatio-temporal bias is built into a systematising utilitarian ethic – conceived as a theory of (dis)value. For sure, speculating about life even in the Year 3000 feels faintly absurd, let alone the far future. Yet I believe we can map out an ethical blueprint to safeguard the long-term future of sentience. Whether one is a secular Buddhist or a classical utilitarian, germline engineering can make life in our entire forward light-cone inherently blissful. Crudely, genes, not organisms, have evolutionary longevity, i.e. replicators rather than their vehicles. Genome-editing promises a biohappiness revolution, a momentous discontinuity in the evolution of life. The biosphere can be reprogrammed: future life can be animated entirely by information-sensitive gradients of well-being. Therefore both pain-eradication and hedonic recalibration via germline engineering are longtermist – indeed ultra-longtermist – policy options: proponents and bioconservative critics agree on the fateful nature of our choices. If editing our genetic source code is done wisely, then a transhumanist civilisation of superintelligence, superlongevity and superhappiness can underpin the well-being of all sentience indefinitely.
So let’s get it right.

However, some aspects of EA Longtermism in its current guise do concern me.

(1) Science does not understand reality. From cosmology to the foundations of quantum mechanics to digital (in)sentience to the Hard Problem of consciousness to the binding problem to normative ethics and meta-ethics, the smartest minds of our civilisation disagree. The conceptual framework of transhumans and posthumans may be unimaginably alien to archaic humans – although in the absence of (at least one end of) a pleasure-pain axis, posthuman life could scarcely matter. Either way, it would be a terrible irony if Longtermists were to influence humanity to make big sacrifices, or just neglect contemporary evils, for a pipedream. After all, Longtermism has unhappy historical precedents. Consider, say, fifteenth-century Spain and the Holy Inquisition. If Grand Inquisitor Tomás de Torquemada’s moral and metaphysical framework were correct, then neglecting worldly ills in favour of saving souls from an eternity of torment in Hell – and from missing out on eternal bliss in Heaven – by inflicting intense short-term suffering would be defensible, maybe even ethically mandatory. Planning for all eternity is as longtermist as it gets. Yet such anguish was all for nothing: scientific rationalists recognise that religious belief in Heaven and Hell rests on spurious metaphysics. Analogously, influential AI researchers, transhumanists and effective altruists today assume that digital computers will somehow “wake up” and support unified subjects of experience, digital “mind uploads” and eventually quintillions of blissful digital supercivilisations. However, IMO the metaphysics of digital sentience is no better supported than an ontology of immortal souls. Conscious Turing machines are a fantasy. If physicalism is true, i.e. no spooky “strong” emergence, then the number of digital supercivilisations with blissful sentient beings will be zero.

Disbelief in the digital sentience assumed by a lot of Longtermist literature doesn’t reflect an arbitrary substrate-chauvinism. If physicalism is true, then a classical Turing machine that’s physically constituted from carbon rather than silicon couldn’t support unified subjects of experience either, regardless of its speed of execution or the complexity of its code. Programmable classical computers and classically parallel connectionist systems promise “narrow” superintelligence, but they can’t solve the phenomenal binding problem. Phenomenal binding is non-classical and non-algorithmic. Even if consciousness is fundamental to the world, as constitutive panpsychists propose, digital computers are zombies – technically, microexperiential zombies – that are no more sentient than a toaster. So it would be tragic if contemporary humans made sacrifices for a future digital paradise that never comes to pass. By the same token, it would be tragic if Longtermist EAs neglected existing evils in the notional interests of a transgalactic civilisation that never materializes because other solar systems are too distant for sentient biological interstellar travel.

Of course, any extended parallel between religious ideologues and ill-judged Longtermism would be unfair. Longtermist EAs have no intention of tormenting anyone to create a digital paradise or colonize the Virgo Supercluster any more than to save their souls. Rather, I think the risk of some versions of Longtermism is distraction: neglect of the interests of real suffering beings and their offspring on Earth today. From ending the horrors of factory farming and wild-animal suffering to genetically phasing out the biology of pain and depression, there are urgent evils that EAs need to tackle now. With effort, imagination and resources, the biology of mental and physical pain can be banished not just in the long term, but for ever. Compare getting rid of smallpox. For sure, vegan lobbying to end the obscene cruelties of animal agriculture might not sound Longtermist. But humanity isn’t going to reprogram genomes and engineer compassionate ecosystems while we are still systematically harming sentient beings in factory-farms and slaughterhouses. Veganizing the biosphere and a relatively near-term focus on creating a civilisation with a genetically-encoded hedonic range of, say, +10 to +20 don’t neglect the interests of a vaster far-future civilisation with a hedonic range of, say, +90 to +100. Rather, engineering the hedonic foothills of post-Darwinian life is a precondition for future glories. Moreover, talk of far-future “generations” may mislead. This millennium, our Darwinian biology of aging is likely to vanish into evolutionary history – and with it, the nature of procreative freedom, sexual reproduction and generational turnover as we understand these concepts today. Indeed, transhumanist focus on defeating the biology of aging – with stopgap cryonics and cryothanasia as a fallback option – will promote long-term thinking if not Longtermism; contemporary humans will care much more about safeguarding the far future if they think they might be around to enjoy it.

(2) “Longtermism” means something different within the conceptual scheme of classical and negative utilitarianism. The policy prescriptions of pleasure-maximisers and pain-minimisers may vary accordingly. Likewise with long-term planning in general: background assumptions differ. Irrespective of timescales, if you believe that our overriding moral obligation is to mitigate, minimise and prevent suffering – crudely, LT(NU) – then you will have a different metric of (dis)value than if you give equal moral weight to maximising pleasure – crudely, LT(CU). Effective altruist discussion of Longtermism needs to spell out these differing ethical frameworks – regardless of how self-evident such core assumptions may seem to their respective protagonists. For instance, within some neo-Buddhist LT(NU) ethical frameworks, engineering a vacuum phase transition painlessly to end suffering with a “nirvana shockwave” can be conceived as Longtermist (“I teach one thing and one thing only…suffering and the end of suffering” – Gautama Buddha, attrib.) no less than LT(CU) planning for zillions of Omelas. Alternatively, some NUs can (and do!) favour engineering a world of superhuman bliss, just as, other things being equal, CUs can (and do) favour the abolition of suffering. But NUs will always “walk away from Omelas”, i.e. avoid pleasure obtained at anyone else’s expense, whereas CUs will permit or inflict suffering – even astronomical amounts of suffering – if the payoff is sufficiently huge. Also, the CU-versus-NU dichotomy I’ve drawn here is an oversimplification. Many passionate life-affirmers are not classical utilitarians. Many suffering-focused ethicists are not negative utilitarians. However, I am a negative utilitarian – a negative utilitarian who favours practical policy prescriptions promoting a world based entirely on gradients of superhuman bliss. So my conception of Longtermism and long-term planning varies accordingly.
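To make the contrast concrete, here is a deliberately crude toy sketch of my own – not drawn from the longtermist literature, and no substitute for a serious axiology – of how a classical and a negative utilitarian might score the same outcomes, given per-individual hedonic values where negative numbers denote suffering:

```python
# Toy illustration (my own, purely for exposition): per-individual hedonic values,
# negative = suffering, positive = pleasure.

def cu_value(hedonic_values):
    """Classical utilitarian metric: pleasure and pain count with equal weight."""
    return sum(hedonic_values)

def nu_value(hedonic_values):
    """Negative utilitarian metric: only suffering counts; pleasure adds nothing."""
    return sum(v for v in hedonic_values if v < 0)

omelas = [100, 100, 100, -90]   # vast bliss purchased with one being's misery
modest = [5, 5, 5, 0]           # mild well-being, nobody below hedonic zero

print(cu_value(omelas), cu_value(modest))   # 210 vs 15 -> the CU calculus endorses Omelas
print(nu_value(omelas), nu_value(modest))   # -90 vs 0  -> the NU calculus walks away
```

On these toy numbers, the CU metric prefers Omelas and the NU metric walks away – the disagreement sketched above in miniature.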

Why NU? Doesn’t a NU ethic have disturbingly counterintuitive implications? Forgive me for just hotlinking here why I am a negative utilitarian. I want to add that if you even glimpsed how atrocious suffering can be, then you too would destroy yourself and the world to end it – permanently. And in so doing, you wouldn’t be guilty of somehow overestimating the ghastliness of intense suffering; I’m not going to link specific examples, though perhaps I should do so if anyone here disagrees. Modern physics tells us that reality is a seamless whole: in my view, the universal wavefunction is inconceivably evil. Hundreds of thousands of people do take the path of self-deliverance each year. Millions more try and fail. If humanity opts to conserve the biology of suffering, then with advanced technology maybe some of their pain-ridden twenty-second-century counterparts will take the rest of their world down too. And it’s not just suicidal depressives who want to end their nightmarish existence. Insofar as twenty-first-century humanity really stands on the edge of a Precipice, I know morally serious agents willing to administer a vigorous shove.

Most classical utilitarians are unmoved by such pleas to prioritise ending suffering. For them, life is a marvellous gift to be perpetuated at any price. CUs respond that if you understood how inexpressibly wonderful pleasure could be, then you’d endure – and inflict – fiendish torments to access the sublime (“I would give my whole life for this one instant”, said Prince Myshkin, protagonist of Fyodor Dostoevsky’s 1869 novel “The Idiot”; Dostoevsky had ecstatic seizures). A similar effect can be induced by speedballing or mainlining heroin (“it’s like kissing God” – Lenny Bruce). Therefore, CUs and NUs have different conceptions of information hazards – and their suppression. EA funders have different conceptions of info-hazards too, although CU backers are immensely wealthier. Sadly, Phil Torres is correct to speak of EAs who have been “intimidated, silenced, or ‘canceled’”. But rather than reflecting the moral turpitude of the cancellers or their sponsors, or even the corrupting influence of power and money, such cancellation reflects these differing ethical frameworks.
That said, publicity and suppression alike can be morally hazardous.

So what is the best way forward for the effective altruist movement?
I’m not sure. Just as the transhumanist movement has mutated over the past quarter-century, so the overlapping effective altruist movement is rapidly changing with the ascendancy of LT(CU). Funding and social-primate power-dynamics play a big role too. But traditional fault-lines aren’t going away. Can the gulf between suffering-focused ethicists and classical utilitarians be bridged in the realm of concrete policy?

Well, on a (very) optimistic note, I wonder whether longtermist and near-termist effective altruists, NUs and CUs alike, could unite on a “traditional” EA agenda of effectively tackling existing sources of suffering. My reasoning is as follows. Combining socio-economic reform, poverty-reduction, effective giving and so forth with a biological-genetic strategy of germline engineering melds short-, medium- and long-term EA. This concordance is highly suspect – I don’t trust my judgement or motivations here. Yet if, counterfactually, my primary concern were existential risk (“x-risk”) rather than [something worse] and suffering-reduction, then reducing existing sources of suffering would still loom large, if not foremost. For one of the most effective ways to reduce x-risk will be to phase out the biology of involuntary suffering and turn everybody into fanatical life-lovers. In a world based entirely on gradients of intelligent well-being, NU and its offshoots could come to seem like affective disorders of a bygone era – unthinkable pathologies. What’s more, archaic humans who might potentially destroy the world aren’t just depressive NUs, “strong” antinatalists, efilists and Benatarians (etc.) – most of whom are marginal figures far removed from the levers of power. From Cold War warriors (cf. “Better Dead Than Red!”) to defeated despots (cf. Hitler’s March 1945 “Nero Decree”, which called for the systematic destruction of Germany), many powerful and competitive non-depressive people have a conditionally-activated predisposition to want to bring the world down with them if they fail. Such historical examples could be multiplied; humans now have weapons of mass destruction to express their apocalyptic impulses. Crudely, uncontrollable suffering is bound up with nihilism, just as happiness is bound up with life-affirmation. X-risk worriers and CU Longtermists should take the biology of suffering very seriously.

What’s more, the organisational vehicle to deliver a stunningly life-affirming vision of global happiness already exists. In its founding constitution, the World Health Organization defines health as complete well-being (“Health is a state of complete physical, mental and social well-being”). The ambition of such a commitment is jaw-dropping. Can the WHO be effectively lobbied by EAs to live up to its obligations? I don’t think transhumanists and EAs should be quite so ambitious as the WHO in our conception of health: conserving information-sensitivity is vital. We should aim merely for an architecture of mind based entirely on gradients of well-being. Complete well-being can wait. But if humanity embraces genome reform, then we can come arbitrarily close to the WHO vision of universal well-being via germline editing under a banner of good health for all. Indeed, universal health as defined by the WHO is possible only via genetic engineering. Genome reform is the only longterm(ist) solution to the problem of suffering – short of retiring biological life altogether. Further, the elegance of genetically recalibrating the hedonic treadmill is that hedonic recalibration can potentially be value- and preference-conserving – a critical consideration in winning popular consent. A global health strategy of raising pain-thresholds, hedonic range and hedonic set-points world-wide doesn’t involve adjudicating between logically irreconcilable values and preferences. Recalibration of the hedonic treadmill – as distinct from uniform happiness-maximization or ending suffering via world-annihilation – reflects epistemic humility. Hedonic recalibration can minimise suffering and enhance flourishing while simultaneously keeping all our options open for the future – maybe for a period of long reflection, maybe for an odyssey of psychedelic exploration, who knows? If humanity embraces the abolitionist project – presumably under the auspices of the WHO – then a world without experience below hedonic zero will be safer by the lights of NUs and CUs alike.
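As a purely illustrative toy model (mine, with made-up numbers, not anyone’s proposal), the preference-conserving point can be stated very simply: uniformly raising hedonic values lifts everyone’s hedonic range while leaving every relative comparison – the informational gradients that guide choice – exactly as it was:

```python
# Toy illustration (my own, hypothetical numbers): a uniform upward shift of
# hedonic values preserves the preference ordering between experiences.

experiences = {"chore": -5, "ordinary day": 0, "good meal": 3, "falling in love": 10}

def recalibrate(hedonic_map, shift):
    """Raise every hedonic value by a constant; relative differences are unchanged."""
    return {k: v + shift for k, v in hedonic_map.items()}

shifted = recalibrate(experiences, 15)   # hypothetical post-recalibration values

# The ranking of experiences is identical before and after the shift,
# but nothing falls below hedonic zero any more.
assert sorted(experiences, key=experiences.get) == sorted(shifted, key=shifted.get)
print(shifted)   # {'chore': 10, 'ordinary day': 15, 'good meal': 18, 'falling in love': 25}
```

Real hedonic recalibration would of course not be a simple constant shift; the sketch only shows why raising hedonic set-points needn’t scramble existing values and preferences.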

Superhuman bliss will be the icing on the cake. Future life may be beautiful, even sublime. But in my view, our greatest obligation to future generations is to ensure they aren’t genetically predestined to suffer like us.


Comment: Here is a serious (and long?) reflection on longtermism by David Pearce of HI fame. My view? I am neither a classical utilitarian (CU) nor a negative utilitarian (NU). Instead, I am waiting for a full mathematically formalized theory of valence (the pleasure-pain axis) before I make up my mind. Indeed, I’m hoping (and to some extent expecting!) that the answer will simply “pop out of the math” (as Michael Johnson likes to say). Then we will probably know. Who knows, perhaps the largest hedonic catastrophes and hedonic glories in the universe might have nothing to do with life.

But, I do also think that the current discourse on longtermism is *overwhelmingly* dominated by CU-style thinking. So this piece is a very important “balancing act”.


Featured image credit: @TilingBot
