In Praise of Systematic Empathy

TL;DR: If you are an empathizing utilitarian: Kudos! The world needs more people like you.

You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.


– Eliezer Yudkowsky, Circular Altruism


Empathizing-Systematizing Trade-Offs

The human brain is capable of a great many things. However, the tasks it performs are often so difficult that full concentration (and sometimes complete absorption) is necessary to achieve them.

The empathizing-systematizing theory points out that when one is empathizing, one’s brain becomes less capable of systematizing. And when one engages with problems using a systematizing cognitive style, empathy does not come easily either. In most cases there are trade-offs between these two cognitive styles; they seem to draw on the same pool of mental resources.

Even though these traits are negatively correlated with each other, we can still find people who can quickly switch between the two styles.

These rare people, one could argue, have a unique moral responsibility: to harmonize the differences between the systematizing and empathizing mindsets. With great power comes great responsibility.

Doing this is hard, though. It may be inevitable that, in trying to find common ground between these mindsets, one will feel morally conflicted:

  1. It is easy to care about those who are around you when you are an empathizer.
  2. It is easy to care about the bulk of all sentient beings when you are a non-empathizing systematizer.
  3. But to be a utilitarian and an empathizer… that’s hard. One needs to ignore, or at least not prioritize, the pain of those around you, even though it hurts.

Sample Rationalizations

Example of an empathizer non-systematizer approaching a moral problem: “I love my cat Fluffy. Are you telling me that I should donate to the Against Malaria Foundation, a charity that benefits people far, far away, whom I have never even met, and whose feelings I can’t perceive… instead of feeding my kitten first-class food? Do you really expect me to trade the warm-fuzzy feelings of having a healthy cat for your cold, abstract logic? No thanks.”

Example of a non-empathizer systematizer approaching a moral problem: “Donating to the Against Malaria Foundation instead of keeping a cat will produce approximately 10,000 times as many quality-adjusted life-years (QALYs). Do you really expect me to trade solid, locally sound, and perfectly ethical dollars for the supposed wellbeing of a dirty cat? Let’s face it, half of what you care about when you think about your cat is how much he loves you back. Isn’t denying health to dozens of children just so you can feel loved by a pet incredibly selfish?”

Example of an empathizer systematizer approaching a moral problem the wrong way: “I love Fluffy. I also love all of humanity. Thankfully, everyone can focus on helping the sentient beings that are closest to them. This way we can all be part of a global support network. By helping my cat, and paying for the expensive treatments that he requires to stay healthy, I am doing my own part. Now I just hope everyone else does their part as well.”

Example of an empathizer systematizer approaching the same moral problem, and finally getting it right: “I know that Fluffy needs my love and affection. It pains me to realize that the world is much larger. My feelings tell me ‘just care about those around you; they are the ones to whom you have a moral responsibility’. And yet, I cannot pretend that logic and *counting* simply do not matter. It is, in fact, my moral responsibility to ignore my feelings. To sacrifice a few warm-fuzzy human feels for what is, in the end, a much, much bigger sum of warm-fuzzy feels elsewhere.”


The Transhumanist Bodhisattva

Bodhisattvas are mythological entities found in the Mahayana branch of Buddhism. They are great examples of beings that seem to combine both empathizing and systematizing traits while being motivated by unceasing compassion.

According to Buddhist sutras, bodhisattvas are beings who have realized the true nature of suffering (i.e. that it sucks) and achieved a state of mind that manifests as an unshakeable desire to help all sentient beings. And in the wake of their realization, bodhisattvas have made a vow to dedicate all of their energies to the task of eliminating suffering:

Just as all the previous Sugatas, the Buddhas
Generated the mind of enlightenment
And accomplished all the stages
Of the Bodhisattva training,
So will I too, for the sake of all beings,
Generate the mind of enlightenment
And accomplish all the stages
Of the Bodhisattva training.

– Bodhisattvacaryāvatāra [Translation: Guide to the Bodhisattva’s Way of Life], by Śāntideva

These noble beings intend to help all sentient beings become free from suffering by teaching Buddhism.* Today, one might hope, they would instead choose to focus their energies on the development of biotechnologies of bliss.

David Pearce is what we might call a modern, genomic Bodhisattva. He started a movement called Abolitionism (the bioethical stance that we should use technology to eliminate suffering) and he has spearheaded the compassionate branch of transhumanism.

David is a wonderful human being who, in spite of being naturally predisposed to low hedonic tone (i.e. genetically predisposed to having a bad day, every day, for no good reason whatsoever), dedicates his entire life to the elimination of suffering. And unlike previous incarnations of that desire, he did his homework: he realized that, in this universe, suffering has genetic causes.

Pearce has at times mentioned that we should not think of his vision of abolishing suffering as new or particularly original. He likes to point out that the wish to eliminate suffering is extremely ancient, and that we can find it as the core objective of many spiritual and religious traditions. Abolitionism is, as he puts it, “just providing the implementation details” of what people have been saying for thousands of years. This framing makes Abolitionism more palatable to the average Joe.

Indeed, boundless compassion has been around for a long time. But the ability to kindle it into effective suffering-reducing actions that work in the long term is only now beginning to be possible.


Empathy is Marvelous… and Double-Edged

Our ability to track the inner states of the beings in our lifeworld (our inner experience, including our representations of others) is a marvelous evolutionary innovation: we are the product of a long Machiavellian intelligence arms race in which effective mind-reading could make the difference between being an outcast and becoming the tribal leader. Given the selection pressures of our ancestral tribal environment, it is not surprising that our capacity for empathy is highly selective. Our ability to simulate others’ experiences is thus, to some extent, bound to be inclusive-fitness-enhancing rather than, as would be more desirable, sentience-wellness-enhancing.

It is tough to care about all sentient beings, especially when the ones around you are suffering and you can’t disengage from simulating what it feels like to be them. One’s predisposition to empathize with sentient beings is heavily biased towards the local contexts one lives in, one’s family members (and one’s extended, genetically similar tribe), and whatever happens to trigger the feeling that one’s implicit self-models are threatened by the suffering of others (e.g. when charity workers use empathy blackmailing to make you feel miserable about not helping their particular, not necessarily effective, ethical cause).

Sad to say, a strong involuntary empathetic reaction to others’ suffering is a double-edged sword. On the bright side, it allows you to understand the reality of others’ suffering. And on a case-by-case basis, it also allows you to figure out exactly how to help them (e.g. highly empathetic and agreeable people are great at figuring out what is bugging you). Unfortunately, one’s empathy for others declines with the number of people one empathizes with. Some studies show that people are more likely to donate to a cause when there is only one victim (human or nonhuman animal)… as soon as there are more than a few, one’s empathy becomes overwhelmed and one fails to multiply properly.
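To make the failure to multiply concrete, here is a toy sketch in Python. The logarithmic felt_concern curve is purely an illustrative assumption, a stand-in for the saturation reported in scope-insensitivity studies, not a fitted psychological model:

```python
import math

def expected_value(n_victims, harm_per_victim=1.0):
    """Systematizing evaluation: total harm scales linearly with victims."""
    return n_victims * harm_per_victim

def felt_concern(n_victims, saturation=3.0):
    """Empathic evaluation: felt concern saturates as victim counts grow.

    The log curve is only an illustrative assumption, not a real model.
    """
    return saturation * math.log1p(n_victims)

for n in (1, 10, 1_000, 1_000_000):
    print(f"{n:>9,} victims | multiplied harm: {expected_value(n):>12,.0f}"
          f" | felt concern: {felt_concern(n):6.1f}")
```

On this toy curve, a million victims register only a few times as strongly as one: the multiplication has to happen on paper, because it will not happen in the feeling.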

Additionally, the attention-grabbing and attention-focusing properties of empathy can have intense network effects that make people over-concerned with relatively minor problems. Likewise, this focusing effect makes people unable to revise their moral judgements. They get stuck with silly deontological rationalizations for their non-optimal actions.

Empathy needs to be debugged. Thankfully, we still have the ability to experience and cultivate compassion, alongside our systematizing abilities, without burning out. Now, this is certainly not a call to eliminate empathy! But as long as we don’t fix its profound biases, we cannot rely on it to make ethical choices. We can only use it to understand the reality of the suffering of others. And when the time comes to act, don’t empathize. Just shut up and multiply.

We need to combine systematizing reasoning with compassion. We don’t need emotionally-charged calls to action sparked by individual incidents that affect an especially small number of sentient beings in particularly attention-grabbing ways.

In this day and age (what I think is the beginning of the end of the Darwinian era), we need to temporarily migrate from using empathy as the main source of moral motivation to an ecology of abstract and systematic reasoning guided by compassion. One can do this and still experience the warm fuzzy feelings, but it is harder. One needs to rewire one’s brain a little bit. To tell it: “no, I am still doing what is best. Don’t tell me I’m a bad person for preferring to focus on the bulk of sentient beings instead of the few I happen to know.” It is truly moving to realize that you can overcome some of the ways in which evolution set us up for failure.

Until we hack our consciousness to represent the world in an unbiased way, we will have to rely on systematizing cognition to guide our ethical reasoning. Only by combining compassion, empathy, and a strong systematizing style can our minds grasp the enormity of the problem of suffering, and see why our local solutions are doomed to fail. This combination removes the wishful thinking that comes with empathy alone.

If, alternatively, we continue praying to the God of Empathy as our only strategy, we will reap only local successes. And this will come at the cost of failing at the cosmic level, and of letting billions of sentient beings feel the sinister coldness of Darwinian life.

All of this is to say: if you are a natural empathizer, I want you to know that I get your inner struggles. If, in spite of what your feelings tell you, you still choose the utilitarian action… I can only sing your praises with a sincere heart. Metta to you, my fellow warrior. We will defeat suffering for all, not just those around us.


* If you buy into the Buddhist ontology (no-self, emptiness, ubiquitous suffering, etc.), then dedicating all of your energies over many eons to teaching Buddhism makes a lot of sense. It is only when you think about the exact same problem in light of contemporary science (and the neural underpinnings of suffering) that it becomes clear that the modern bodhisattva would choose to be a transhumanist (and try to eliminate suffering using biotechnology).

David Pearce on “Making Sentience Great”

We need measures of intelligence richer than today’s simple-minded autistic IQ/AQ tests. In principle, we could breed super-intelligent humans like strains of smart mice.


If I were running the program, I’d use cloning with variations. Start with the DNA of promising candidates, especially Ashkenazi Jews. (John von Neumann, for instance, was buried, not cremated).


Using the new tools of CRISPR-based synthetic biology, splice in genes for depression-resistance and perhaps hyper-empathy. Develop and optimize artificial wombs to foster bigger and better embryonic brains; traditional biological pregnancies involve ferocious genetic conflict between mother and embryo, whereas in the future the creation of new life can be geared to the well-being of the unborn child. You can then hothouse the products in an optimally enriched environment. And then clone (with variations) the most promising candidates. No need to wait a whole generation; if a kid wins a Fields Medal aged nine, then clone again with further genetic tweaking. Super-Shulgin academies would have pride of place, together with the EA bioethics department. Spin off a financial services and innovation division so the project becomes self-financing.


Recursive genetic self-improvement could in principle be sustained indefinitely, presumably with an increasing degree of “cyborgisation”: not even an unenriched super-von-Neumann could match the serial depth of processing of a digital computer, but with “narrow AI” routinely implanted on web-enabled neurochips, no matter. The demise of aging and rapid growth of genetic self-editing software would presumably make talk of “generations” in the traditional Darwinian sense increasingly archaic.


Can we foresee any ethical pitfalls? One or two; but if the raison d’être of the project were to promote the well-being of all sentience in our forward light-cone, would you decline the offer of an initial billion-dollar grant?


– David Pearce, answering the question “How would you create a super-intelligence?” (source: Facebook)

Philosophy of Mind Diagrams

“The penalty of not doing philosophy isn’t to transcend it, but simply to give bad philosophical arguments a free pass.”

– David Pearce


These are some slides I created when I presented at the Advancing Humanity Symposium hosted by the Stanford Transhumanist Association in 2013.

My presentation was an introduction to the topic of Mind Uploading. I decided to address the various ontologies postulated in philosophy of mind and in theories of personal identity, so that I could then explore how each possible combination affects the plausibility, and desirability, of Mind Uploading.

In brief, as the table in the slides shows, both personal identity views and mind-body problem solutions influence how you judge Mind Uploading.

For example, an Empty Individualist lacks the belief in a metaphysical self that endures over time. Thus, she would consider her digitized mind (also referred to as an em) to be just as alien to her true identity as the person who will wake up with her memories tomorrow anyway.

Likewise, Open Individualists don’t particularly care about the preservation of memories and personality. What matters to them is different: for example, they may care a lot about making as many sentient beings as happy as possible, rather than about which bodies get to produce which experiences.

But if Closed Individualism is true, then Mind Uploading is an ethical imperative! Truly believing that who you are starts existing when you are born and stops existing when you die implies that not extending your existence indefinitely results in the complete annihilation of a metaphysical being. And that would be sad, wouldn’t it?

When it comes to the Mind-Body Problem, the specifics also influence the plausibility (and desirability) of uploading oneself. For example, if dualism is true, successful mind uploading might require a lot more research: one would need, for instance, to decipher the causal and practical constraints imposed by the consciousness side of the metaphysical equation. If, on the other hand, monistic materialism is true, then Mind Uploading should be as simple as creating a sufficiently detailed brain emulation.
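Since the table itself is not reproduced here, the following sketch reconstructs its gist as a simple Python lookup. The verdict strings merely paraphrase the paragraphs above, and the “any” wildcard is my own convenience, not something from the slides:

```python
# Rough lookup reconstructing the gist of the slides' table. The verdicts
# paraphrase the surrounding text; the real state-space is much larger.
VERDICTS = {
    ("Empty Individualism", "any"):
        "The upload is as alien to you as tomorrow's you already is.",
    ("Open Individualism", "any"):
        "Preserving memories/personality matters little; total wellbeing does.",
    ("Closed Individualism", "any"):
        "Uploading is an ethical imperative: death means complete annihilation.",
    ("any", "Dualism"):
        "Much more research needed: consciousness adds causal constraints.",
    ("any", "Monistic materialism"):
        "A sufficiently detailed brain emulation should suffice.",
}

def judge(identity_view, mind_body):
    """Collect every verdict that applies to a given combination of views."""
    return [v for (i, m), v in VERDICTS.items()
            if i in (identity_view, "any") and m in (mind_body, "any")]

print(judge("Closed Individualism", "Monistic materialism"))
```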

And that is not even scratching the surface. In reality, the state-space of both personal identity views and mind-body solutions is much larger than we can conceive. Who knows what kind of ontologies our posthuman descendants will choose to experience on a daily basis…

Ontological Runaway Scenario

Moral Innovation

Imagine that you iterate the dilemma, each time adding the option of recursing further or just stopping. Every time you recurse, more and more people will have to choose not to recurse if the number of dilemmas is ever going to stop growing.

If the expected number of new dilemmas per choice, that is, the fraction of people who choose to recurse multiplied by the number of dilemmas that each recursion kick-starts, is greater than one, then this leads to a moral runaway scenario.
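A minimal branching-process sketch in Python, with made-up parameters, shows why the threshold is the product of these two quantities:

```python
# p: fraction of people who choose to recurse; k: new dilemmas each
# recursion kick-starts. If p * k > 1, the expected number of live
# dilemmas grows without bound (a moral runaway).
import random

def simulate(p, k, generations=10, seed=0):
    """Return the number of live dilemmas at each generation."""
    rng = random.Random(seed)
    counts = [1]  # start from a single dilemma
    for _ in range(generations):
        spawned = sum(k for _ in range(counts[-1]) if rng.random() < p)
        counts.append(spawned)
    return counts

print(simulate(p=0.4, k=2))  # p*k = 0.8 < 1: the dilemmas tend to die out
print(simulate(p=0.7, k=2))  # p*k = 1.4 > 1: runaway growth
```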

Life Runaway

I’m sure you can tempt people who are explicit classical utilitarians with this schema.

All you need to do is add the condition that the people in the train are having fun. And suddenly, you have a scenario in which you are contemplating the creation of arbitrarily large hedonic pipelines of positive and negative pathways, in which the positive and negative qualities supposedly cancel each other out. Does this sound fun? Does it sound like Jinjang?

Time-Beyond-Time

People with stranger ontologies may also be tempted by the scenario. Think of this concept: “In the ocean of being, all of the mistakes are forgiven. Only the learning remains.” In this scenario, our own reality implies, and thus enables, the existence of a time orthogonal to ours. And this orthogonal time in turn implies and enables a further time-beyond-time-beyond-time reality. This is now an ontological runaway scenario.

But if you have a stacked moral and ontological runaway scenario, you can perhaps choose to perpetuate life, with all of its suffering and bad qualities (in addition to the bliss and love in it), for ethical reasons. It is, after all, “the ocean of being” that learns, and if it does not learn in this reality, it will not be able to pass on useful information to the next layer of reality. The first time orthogonal to ours will lack the information it requires to prevent suffering from its own point of view. “The only reason God would have created this universe is to prevent an even bigger evil.”