Above: “Virtue Signaling” by Geoffrey Miller. This presentation was given at EAGlobal 2016 at the Berkeley campus.
For a good introduction to the EA movement, we suggest this amazing essay written by Scott Alexander from SlateStarCodex, which talks about his experience at EAGlobal 2017 in San Francisco (note: we were there too, and the essay briefly discusses our encounter with him).
We have previously discussed why valence research is so important to EA. In brief, we argue that in order to minimize suffering we need to actually unpack what it means for an experience to have low valence (i.e., to feel bad). Unfortunately, modern affective neuroscience does not have a full answer to this question, but we believe that the approach we use at the Qualia Research Institute has the potential to actually uncover the underlying equation for valence. We deeply support the EA cause and we think that it can only benefit from foundational consciousness research.
We’ve already covered some of the work by Geoffrey Miller (see this, this, and this). His sexual selection framework for understanding psychological traits is highly illuminating, and we believe that it will, ultimately, be a crucial piece of the puzzle of valence as well.
We think that in this video Geoffrey makes some key points about how society may perceive EAs, points that are very important to keep in mind as the movement grows. Here is a partial transcript of the video that we think anyone interested in EA should read (it covers 11:19-20:03):
So, I’m gonna run through the different traits that I think are the most relevant to EA issues. One is low intelligence versus high intelligence. This is a remarkably high intelligence crowd. And that’s good in lots of ways. Like, you can analyze complex things better. A problem comes when you try to communicate findings to the people in the middle of the bell curve, or even to the lower end. Those folks are the ones who are susceptible to buying books like “Homeopathic Care for Cats and Dogs”, which is not evidence-based (your cat will die). Or giving to “Guide Dogs for the Blind”. And if you think “I’m going to explain my ethical system through Bayesian rationality”, you might impress people, you might signal high IQ, but you might not convince them.
I think there is a particular danger of “runaway IQ-signaling” in EA. I’m relatively new to EA, I’m totally on board with what this community is doing, I think it’s awesome, it’s terrific… I’m very concerned that it doesn’t go down the same path I’ve seen many other fields go down, which is: when you have bright people, they start competing for status on the basis of brightness, rather than on the basis of actual contributions to the field.
So if you have elitist credentialism, like if your first question is “where did you go to school?”. Or “I take more Provigil than you”, so you’re in a nootropics arms race. Or you have exclusionary jargon that nobody can understand without Googling it. Or you’re skeptical about everything equally, because skepticism seems like a high IQ thing to do. Or you fetishize counter-intuitive arguments and results. These are problems. If your idea of a Trolley Problem involves twelve different tracks, then you’re probably IQ signaling.
A key Big Five personality trait to worry about, or to think about consciously, is openness to experience. Low openness tends to be associated with drinking alcohol, voting Trump, giving to ineffective charities, standing for traditional family values, and being sexually inhibited. High openness to experience tends to be associated with, well, “I take psychedelics”, or “I’m libertarian”, or “I give to SCI”, or “I’m polyamorous”, or “casual sex is awesome”.
Now, it’s weird that all these things come in a package (left), and that all these things come in a package (right), but that empirically seems to be the case.
Now, one issue here is that high openness is great (I’m highly open, and most of you guys are too), but what we don’t want to do is try to sell people the whole package and say “you can’t be EA unless you are politically liberal”, or “unless you are a Globalist”, or “unless you support unlimited immigration”, or “unless you support BDSM”, or “transhumanism”, or whatever… right, you can get into runaway openness signaling like the Social Justice Warriors do, and that can be quite counter-productive in terms of how your field operates and how it appears to others. If you are using rhetoric that just reactively disses all of these things [low openness attributes], be aware that you will alienate a lot of people with low openness. And you will alienate a lot of conservative business folks who have a lot of money and who could be helpful.
Another trait is agreeableness. Kind of… kindness, and empathy, and sympathy. So, low agreeableness: this is the trait with the biggest sex difference on average; men are lower on agreeableness than women. Why? Because we did a bit more hunting, and stabbing each other, and eating meat. And high agreeableness tends to be more “cuddle parties”, and “voting for Clinton”, and “eating tofu”, and “affirmative consent rather than Fifty Shades”.
EA is a little bit weird because this community, from my observations, combines certain elements of high agreeableness (obviously, you guys care passionately about sentient welfare across enormous spans of time and space). But it also tends to come across, potentially, as low agreeableness, and that could be a problem. If you analyze ethical and welfare problems using just cold rationality, or you emphasize rationality (because you are mostly IQ signaling), it comes across to everyone outside EA as low agreeableness. As borderline sociopathic. Because traditional ethics and morality, and charity, are about warm-heartedness, not about actually analyzing problems. So just be aware: this is a key personality trait, and we have to be really careful about how we signal it.
High agreeableness tends to be things like traditional charity, where you have a deontological perspective, sacred moral rules, sentimental anecdotes, “we’re helping people with this well in Africa that spins around, children push on it, awesome… whatever”. You focus on vulnerable cuteness, like charismatic megafauna if you are doing animal welfare. You focus on in-group loyalty, like “let’s help Americans before we help Africa”. That’s not very effective, but it’s highly compelling… emotionally… to most people, as a signal. And the stuff that EA tends to do (facing tough trade-offs, doing expected utility calculations, focusing on abstract sentience rather than cuteness) can come across as quite cold-hearted.
EA so far, in my view (I haven’t run personality questionnaires on all of you, but this is my impression), tends to attract a fairly narrow range of cognitive and personality types. Obviously high IQ, probably the upper 5% of the bell curve. Very high openness; I doubt there are many Trump supporters here. I don’t know. Probably not. [Audience member: “raise your hands”. Laughs. Someone raises hands]. Uh oh, a lynching on the Berkeley campus. And in a way there might be a little bit of low agreeableness, combined with abstract concern for sentient welfare. It takes a certain kind of lack of agreeableness to even think in complex rational ways about welfare. And of course there is a fairly high proportion of nerds and geeks (i.e., Asperger’s syndrome), me as much as anybody else out here, with a focus on what Simon Baron-Cohen calls “systematizing” over “empathizing”. So if you think systematically, and you like making lists, and doing rational expected value calculations, that tends to be a kind of Aspie way of approaching things. The result is, if you make systematizing arguments, you will come across as Aspie, and that can be good or bad depending on the social context. If you do a hard-headed, or cold-hearted, analysis of suffering, that also tends to signal so-called dark triad traits (narcissism, Machiavellianism, and sociopathy), and I know this is a problem socially, and sexually, for some EAs that I know! That they come across to others as narcissistic, Machiavellian, or sociopathic, even though they are actually doing more good in the world than the high agreeableness folks.
[Thus] I think virtue signaling helps explain why EA is prone to runaway signaling of intelligence and openness. So if you include a lot more math than you strictly need to, or more intricate arguments, or more mind-bending counterfactuals, that might be more about signaling your own IQ than solving relevant problems. I think it can also explain, as the last few slides suggest, why EA concerns about tractability, globalism, and problem neglectedness can seem so weird, cold, and unappealing to many people.