TL;DR: If you are an empathizing utilitarian: kudos! The world needs more people like you.
You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.

– Eliezer Yudkowsky
Empathizing-Systematizing Trade-Offs
The human brain is capable of a great many things. Often, however, the tasks it performs are so difficult that full concentration (and sometimes complete absorption) is required to achieve them.
The empathizing-systematizing theory points out that while one is empathizing, one’s brain becomes less capable of systematizing; and when one engages with problems in a systematizing cognitive style, empathy does not come easily either. In most cases there are trade-offs between these two cognitive styles: they seem to draw from the same pool of mental resources.
Even though these traits are negatively correlated, there are still people who can switch quickly between the two styles.
These rare people, one could argue, have a unique moral responsibility: to harmonize the differences between the systematizing and empathizing mindsets. With great power comes great responsibility.
Doing this is hard, though. Anyone trying to find common ground between these mindsets may well end up feeling morally conflicted:
- It is easy to care about those who are around you when you are an empathizer.
- It is easy to care about the bulk of all sentient beings when you are a non-empathizing systematizer.
- But to be a utilitarian and an empathizer… that’s hard. One needs to ignore, or at least not prioritize, the pain of those nearby, even though it hurts.
Sample Rationalizations
Example of an empathizer non-systematizer approaching a moral problem: “I love my cat Fluffy. Are you telling me that I should donate to the Against Malaria Foundation, a charity that benefits people far, far away, whom I have never even met, and whose feelings I can’t perceive… instead of feeding my kitten first-class food? Do you really expect me to trade the warm-fuzzy feelings of having a healthy cat for your cold, abstract logic? No thanks.”
Example of a non-empathizer systematizer approaching a moral problem: “Donating to the Against Malaria Foundation instead of having a cat will produce roughly 10,000 times as many Quality-Adjusted Life-Years (QALYs). Do you really expect me to trade solid, locally sound, and perfectly ethical dollars for the supposed wellbeing of a dirty cat? Let’s face it, half of what you care about when you think about your cat is how much he loves you back. Isn’t denying health to tens of children just so you can feel loved by a pet incredibly selfish?”
Example of an empathizer systematizer approaching a moral problem the wrong way: “I love Fluffy. I also love all of humanity. Thankfully, everyone can focus on helping the sentient beings that are closest to them. This way we can all be part of a global support network. By helping my cat, and paying for the expensive treatments that he requires to stay healthy, I am doing my own part. Now I just hope everyone else does their part as well.”
Example of an empathizer systematizer approaching the same moral problem, and finally getting it right: “I know that Fluffy needs my love and affection. It pains me to realize that the world is much larger. My feelings tell me ‘just care about those around you; it is to them that you have a moral responsibility’. And yet, I cannot pretend that logic and *counting* simply do not matter. It is, in fact, my moral responsibility to ignore my feelings. To sacrifice a few warm-fuzzy human feels for what is, in the end, a much, much bigger sum of warm-fuzzy feels elsewhere.”
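The “shut up and multiply” move in these monologues is, at bottom, a per-dollar expected-value comparison. Here is a minimal sketch of that arithmetic in Python. All numbers are made-up placeholders for illustration, not real cost-effectiveness figures for cat care or any charity:

```python
# Toy "shut up and multiply" comparison.
# Every number below is a made-up placeholder, NOT a real
# cost-effectiveness estimate for any charity or for pet care.

def qalys_per_dollar(qalys: float, dollars: float) -> float:
    """Expected Quality-Adjusted Life-Years produced per dollar spent."""
    return qalys / dollars

# Hypothetical: $500 on premium cat care vs. $500 to a global-health charity.
cat_care = qalys_per_dollar(qalys=0.001, dollars=500.0)  # warm fuzzies, tiny QALY impact
charity = qalys_per_dollar(qalys=10.0, dollars=500.0)    # large QALY impact

ratio = charity / cat_care
print(f"Per dollar, the charity buys {ratio:,.0f}x more QALYs in this toy model.")
```

With these invented inputs the ratio comes out to the 10,000x figure the systematizer quotes above; the point is not the specific numbers but that the comparison is an explicit division and multiplication rather than a gut feeling.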
The Transhumanist Bodhisattva
Bodhisattvas are mythological entities found in the Mahayana branch of Buddhism. They are great examples of entities that seem to combine both empathizing and systematizing traits, while being motivated by unceasing compassion.
According to Buddhist sutras, bodhisattvas are beings who have realized the true nature of suffering (i.e. that it sucks) and achieved a state of mind that manifests as an unshakeable desire to help all sentient beings. In the wake of this realization, bodhisattvas vow to dedicate all of their energies to the task of eliminating suffering:
Just as all the previous Sugatas, the Buddhas
Generated the mind of enlightenment
And accomplished all the stages
Of the Bodhisattva training,
So will I too, for the sake of all beings,
Generate the mind of enlightenment
And accomplish all the stages
Of the Bodhisattva training.

– Bodhisattvacaryāvatāra [Translation: Guide to the Bodhisattva’s Way of Life], by Śāntideva
These noble beings intend to help all sentient beings become free from suffering by teaching Buddhism.* Today, one might hope, they would choose to focus their energies on the development of biotechnologies of bliss.
David Pearce is what we might call a modern, genomic Bodhisattva. He started a movement called Abolitionism (the bioethical stance that we should use technology to eliminate suffering) and he has spearheaded the compassionate branch of transhumanism.
David is a wonderful human being who, in spite of being naturally predisposed to low hedonic tone (i.e. being genetically predisposed to having a bad day, every day, for no good reason whatsoever), dedicates his entire life to the elimination of suffering. And unlike previous incarnations of that desire, he did his homework: he realized that, in this universe, suffering has genetic causes.
Pearce has at times mentioned that we should not think of his vision of abolishing suffering as new or particularly original. He likes to point out that the wish to eliminate suffering is extremely ancient, and that we can find it at the core of many spiritual and religious traditions. Abolitionism is, as he puts it, “just providing the implementation details” of what people have been saying for thousands of years. This framing makes Abolitionism more palatable to the average Joe.
Indeed, boundless compassion has been around for a long time. But the ability to kindle it into effective suffering-reducing actions that may work in the long term is only now beginning to be possible.
Empathy is Marvelous… and Double-Edged
Our ability to track the inner state of beings in our lifeworld (our inner experience, including our representations of others) is a marvelous evolutionary innovation: we are the product of a long Machiavellian intelligence arms race in which effective mind-reading could make the difference between being an outcast and becoming the tribal leader. Given the selection pressures of our ancestral tribal environment, it is not surprising that our capacity for empathy is highly selective. Our ability to simulate others’ experiences is thus, to some extent, bound to be inclusive-fitness-enhancing rather than sentience-wellness-enhancing, which would be more desirable.
It is tough to care about all sentient beings, especially when the ones around you are suffering and you can’t disengage from simulating what it feels like to be them. One’s predisposition to empathize with sentient beings is heavily biased toward the local contexts one lives in, one’s family members (and one’s extended, genetically similar tribe), and whatever happens to trigger the feeling that one’s implicit self-models are threatened by the suffering of others (e.g. when charity workers use empathy blackmail to make you feel miserable about not helping their particular, not necessarily effective, ethical cause).
Sad to say, a strong involuntary empathetic reaction to others’ suffering is a double-edged sword. On the bright side, it allows you to understand the reality of others’ suffering. And on a case-by-case basis, it also allows you to figure out how exactly to help them (e.g. highly empathetic and agreeable people are great at figuring out what is bugging you). Unfortunately, one’s empathy for others declines with the number of people one empathizes with. Some studies show that people are more likely to donate to a cause when there is only a single victim (human or nonhuman animal)… as soon as there are more than a few, one’s empathy becomes overwhelmed and one fails to multiply properly.
Additionally, the attention-grabbing and attention-focusing properties of empathy can have intense network effects that make people over-concerned with relatively minor problems. Likewise, this focusing effect makes people unable to revise their moral judgements. They get stuck with silly deontological rationalizations for their non-optimal actions.
Empathy needs to be debugged. Thankfully, we can still experience and cultivate compassion, along with systematizing abilities, without burning out. Now, this is certainly not a call to eliminate empathy! But as long as we don’t fix its profound biases, we cannot rely on it to make ethical choices. We can only use it to understand the reality of the suffering of others. And when the time comes to act, don’t empathize. Just shut up and multiply.
We need to combine systematizing reasoning with compassion, not emotionally charged calls to action sparked by individual incidents that affect an especially small number of sentient beings in particularly attention-grabbing ways.
In this day and age (what I think is the beginning of the end of the Darwinian era), we need to temporarily migrate from using empathy as the main source of moral motivation to an ecology of abstract and systematic reasoning guided by compassion. One can do this and still experience the warm fuzzy feelings, but it is harder. One needs to rewire one’s brain a little bit. To tell it “no, I am still doing what is best. Don’t tell me I’m a bad person for preferring to focus on the bulk of sentient beings instead of the few I happen to know.” It is truly moving to realize that you can overcome some of the ways in which evolution set us up for failure.
Until we hack our consciousness to represent the world in an unbiased way, we will have to rely on systematizing cognition to guide our ethical reasoning. Only by combining compassion, empathy, and a strong systematizing style can our minds grasp the enormity of the problem of suffering, and why our local solutions are doomed to fail. This combination strips away the wishful thinking that comes with empathy.
If, alternatively, we continue praying to the God of Empathy as our only strategy, we will reap only local successes, at the cost of failing at the cosmic level and letting billions of sentient beings feel the sinister coldness of Darwinian life.
All of this is to say: if you are a natural empathizer, I want to let you know that I get your inner struggles. If, in spite of what your feelings tell you, you still choose to do the utilitarian action… I can only sing your praises with my sincere heart. Metta to you, my fellow warrior. We will defeat suffering for all; not just those around us.
* If you buy into the Buddhist ontology (no-self, emptiness, ubiquitous suffering, etc.), then dedicating all of your energies over many eons to teaching Buddhism makes a lot of sense. It is only when you think about the exact same problem in light of contemporary science (and the neural underpinnings of suffering) that it becomes clear that the modern bodhisattva would choose to be a transhumanist (and try to eliminate suffering using biotechnology).