Why Care About Meme Hazards and Thoughts on How to Handle Them

By Justin Shovelain and Andrés Gómez Emilsson

Definition

Nick Bostrom defines an “Information Hazard” as: “A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” A more general category is that of “Memetic Hazard”, which is not restricted to the potential harms of true information. False claims and mistaken beliefs can also produce harm, and should thus also be considered in any ethically-motivated policy for information dissemination. 

Introduction

Perhaps the best-known analysis of meme hazards is Nick Bostrom’s work on Information Hazards, the Unilateralist’s Curse, and Singletons. His focus could roughly be described as classifying the types of situations that can give rise to information hazards. Parallel to the problem of categorizing memetic hazards are the problems of coming up with policies for dealing with them, and of convincing people that they should care. In this post we suggest some basic heuristics for dealing with meme hazards, and explain why you should care about them even when your work seems unambiguously positive.

Motivation

Why You Should Care

A big problem with getting people to engage with any kind of memetic hazard policy is that it may be perceived as a voluntary constraint on one’s behavior with little to no personal benefit. Nobody (well, at least nobody we know*) gets excited about compliance training at a new job, or inspection day at a manufacturing facility. Subjectively, most people perceive compliance and oversight as something that gets in the way of doing one’s work and as a hassle for one’s organization. That said, there is reason to believe that as the world’s technologies become both more powerful and more widely accessible, there will be more and more dangerous information around. So at least on a global scale, it will become increasingly important for people to consider the impact of the information they choose to share. But at an individual level, why would anyone care about meme hazard policies rather than thinking of them as a bothersome constraint?

Just like there are actions that can help or harm, there are ideas that can help or harm. Furthermore, some ideas produce their primary good or bad effects through social transmission; we can call such ideas memes. There are several ways to prevent harm from memes: not producing them in the first place, not sharing them, or fixing the situation (before or after dispersal) so that they do not do damage once dispersed. Let’s call policies to prevent harm from meme hazards meme hazard policies. Because in a world of increasingly accessible technological power many of our largest effects are likely to be produced by memes, a good way to improve the chances of achieving one’s goals is to tilt things toward those goals with good meme hazard policies. It thus makes sense to read works about meme hazard policy and to think about how it bears on one’s own work. This way you can improve your design and implementation of meme hazard policies so that they do not hamper your own goals. In particular, assuming that you are a rational agent (one who attempts to be both epistemically and instrumentally rational), you will generally find that spreading dangerous information that causes large negative effects (even if by accident!) will interfere with your ability to carry out your own goals.

Why Good Work May Have Bad Net Effects

When one engages in very novel research one should be careful to consider the ratio with which one’s work advances desired outcomes relative to undesired outcomes. This may yield surprising results, sometimes flipping the sign of the net effect of research that at first seemed unambiguously good. For example, Artificial Intelligence alignment research may in principle increase the chances of unaligned AI by virtue of providing insights into how to build powerful AIs in general. If it is 100 times harder to build an aligned AI than an unaligned AI, and researching AI alignment advances the goal of building unaligned AIs by more than 1/100 of how much it advances building aligned AIs, then such research would (counter-intuitively) increase the chances of building unaligned AIs relative to aligned AIs.
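To make the arithmetic concrete, here is a minimal sketch of the ratio argument in Python. Everything in it (the function name, the 100x difficulty gap, the 2% spillover) is an illustrative assumption rather than an estimate:

    # Toy model of the "ratio argument": work aimed at alignment can still
    # advance unaligned AI faster in *relative* terms.
    def relative_speedup(progress_aligned, progress_unaligned,
                         difficulty_aligned=100.0, difficulty_unaligned=1.0):
        """Return the fraction of each remaining problem that the work solves."""
        return (progress_aligned / difficulty_aligned,
                progress_unaligned / difficulty_unaligned)

    # One unit of alignment progress that spills over just 2% as much
    # insight into building powerful AIs in general:
    frac_aligned, frac_unaligned = relative_speedup(1.0, 0.02)
    print(frac_aligned, frac_unaligned)
    # 0.01 vs. 0.02: the unaligned project advances twice as fast in relative
    # terms, because the 2% spillover exceeds the 1/100 break-even ratio.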

As another example of how seemingly good work may have bad net effects, let’s consider how information mutates in a social network. As discussed in previous articles such as Consciousness vs. Replicators, there is no universal reason why causing large effects and causing good effects have to be correlated (see also: Basic AI Drives and Spreading happiness more difficult than just spreading). With an evolutionary view, it becomes clear that memes that start out good and beneficial to everyone can eventually evolve to become bad and harmful to everyone if by doing so they gain a reproductive edge (a toy simulation of this dynamic appears after the list). As a rule of thumb, you can expect ideas to mutate towards:

    1. Noise due to generation loss
      • Unless your copying method is perfect or has error correction, every time you make a copy of something the information degrades to some extent. This is called generation loss, and it leads to noisier copies over time.
    2. Simplicity
      • Since information transmission incurs a cost, simpler mutations of a meme have a reproductive edge.
    3. Ease of memorization and communication
      • Mutations that make a meme easier to memorize and communicate are more likely to spread.
    4. Inciting arms races
      • If the meme provides a competitive edge in a zero-sum game, it may give rise to an arms race between the agents who engage in that game. For example, a new marketing method discovered by a given agency would force other marketing agencies to invest in researching how to achieve the same results. Since the rate of evolution of a meme is partly determined by the rate at which iterations over it are performed, a lot of memetic evolution takes place in arms races.
    5. Saliency (cognitive, emotional, perceptual, etc.)
      • Saliency refers to the probability of noticing a given stimulus. Memes that mutate in a way that makes them more noticeable have a reproductive edge. Thus, many memes may over time acquire salient features, such as causing strong emotions.
    6. Uses for social signaling (of intelligence, knowledge, social network, local usefulness, etc.)
      • Consider the difference between manufacturing a car that focuses exclusively on basic functionality and one that also signals wealth. Perhaps it would be better if everyone bought the first kind of car, because the second kind incites in others the urge to get a new car more often than necessary. Namely, people might want to buy a new car whenever the neighbors have upgraded to a more luxurious one (see: Avoid Runaway Signaling in Effective Altruism and Keeping up with the Joneses).
    7. Overselling
      • As a general heuristic, memes spread faster when they are presented as better than they really are. Unless there is a feedback mechanism that lets people learn the true value of a meme, memes that oversell themselves will tend to become more common than those that are honest about the value they provide.
    8. Usefulness
      • The usefulness of a meme increases the chances that it will be passed on.
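To make the evolutionary point concrete, here is a minimal toy simulation in Python (our own sketch: the Meme class, the mutation sizes, and the small erosion of usefulness per copy are all made-up assumptions). Memes are copied with noise and catchier memes are retold more often; over the generations mean catchiness climbs while mean usefulness drifts down, even though nothing selects against usefulness directly:

    # Toy model: selection acts on catchiness (transmissibility), while
    # usefulness merely mutates and erodes with each lossy copy.
    import random
    from statistics import mean

    random.seed(0)

    class Meme:
        def __init__(self, usefulness, catchiness):
            self.usefulness = usefulness   # how much the meme helps its host
            self.catchiness = catchiness   # probability it gets retold

    def transmit(meme):
        """Copy a meme with small random mutations (generation loss)."""
        return Meme(
            # Noisy copying plus a slight systematic erosion of usefulness.
            usefulness=meme.usefulness + random.gauss(0, 0.05) - 0.01,
            # Catchiness mutates too, clamped to a valid probability.
            catchiness=min(1.0, max(0.0, meme.catchiness + random.gauss(0, 0.05))),
        )

    population = [Meme(usefulness=1.0, catchiness=0.5) for _ in range(200)]
    for _ in range(100):
        next_generation = []
        while len(next_generation) < len(population):
            parent = random.choice(population)
            if random.random() < parent.catchiness:   # catchier memes spread more
                next_generation.append(transmit(parent))
        population = next_generation

    print(mean(m.catchiness for m in population))   # climbs toward 1.0
    print(mean(m.usefulness for m in population))   # drifts downward

The specific numbers do not matter; the point is the decoupling between what spreads and what helps.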

Given considerations like the above, it’s clear that in order to achieve what we want we need to think carefully about the possible impacts of our research and efforts, even when they seem unambiguously positive. Now, when should one give special thought to memetic hazard policies?

When Should You Care the Most?

[Figure: Meme Hazard Action Space – Worry when the ideas are both novel and have the potential to have large effects]

There are two key features of potential memetic hazards that should be taken into account when deciding whether to pursue the research that brings them into existence.

The first is how large their effects may be, and the second is how novel they are. How large an effect is depends on factors such as how many people it may affect, how intense the effects would be on each person affected, how long the effects would last, and so on. How novel a meme is depends on factors like how many people already know about it, how much specialized knowledge is required to arrive at it, how counter-intuitive it is, and so on.

No matter how novel a piece of information may be, if it does not have the potential to cause large effects we can disregard it in the context of a meme hazard policy. When the potential to cause large effects is there but the idea is not very novel, one should instead focus on actions that mitigate risks. For instance, if everyone knows how to build nuclear bombs, then the real bottleneck to focus on as a matter of policy would be things like access to the rare or expensive materials needed to build them.

But when the information is both novel and can cause large effects, then the appropriate focus is that of a meme hazard policy based on strategies to handle information dissemination.
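As a minimal sketch of this two-dimensional decision space (the function name and thresholds below are our own illustrative assumptions, not a formalism from the post):

    def meme_hazard_focus(novelty, effect_size, threshold=0.5):
        """Map rough [0, 1] judgments of novelty and effect size to a focus."""
        if effect_size < threshold:
            return "ignore"              # small effects need no policy
        if novelty < threshold:
            return "focus on actions"    # widely known: mitigate downstream risks
        return "focus on ideas"          # novel and high-impact: manage dissemination

    print(meme_hazard_focus(novelty=0.9, effect_size=0.1))   # 'ignore'
    print(meme_hazard_focus(novelty=0.2, effect_size=0.9))   # 'focus on actions'
    print(meme_hazard_focus(novelty=0.9, effect_size=0.9))   # 'focus on ideas'

The three outputs correspond to the three groups of examples below.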

Examples

Ignore:

  • What you had for breakfast, yet another number sorting algorithm, how to make a cat’s fur fluffier

Focus on ideas:

  • A more efficient deep learning technique, a chemical to improve exercise response efficiency, a new rationality technique, information on where the world’s biggest tree is

Focus on actions:

  • The idea of guns, the idea of washing hands for sanitary purposes, running an Ayahuasca retreat in the Amazon

Suggested Heuristics

[Figure: Suggested Responses – yes/no decision diagram]

To wrap up, here we provide a very high-level set of suggested heuristics to consider if one is indeed discovering ideas that are both very novel and capable of producing large effects (a sketch after the list strings them together into a rough checklist):

  • Develop
    • Develop if you conclude that there is no risk
  • Share
    • Share if you conclude that there is no risk
  • Log your analysis and proceed
    • Store the results of your analysis for future use by others who might otherwise overlook the risks, and then continue developing or sharing the idea
  • Think more about it
    • Conclude that it would be valuable to analyze the risks of the meme (e.g. a new technology) further
  • Develop cure
    • Develop a cure for the meme hazard’s downsides
    • This approach may entail selectively sharing the information with people who are highly benevolent, good at keeping secrets, and capable in the relevant domains of expertise
  • Improve the groups that receive it so that it is safe
    • Some information is only risky if certain types of groups get it, so if you change the nature of the groups then there is no risk
  • Frame it so it goes to the right people or only yields good effects
    • The way an idea is posed or framed determines a fair amount of who will read it and how they will act on it
  • Select a safe subset to share
    • When you have information, it may be that some parts are good or safe to share, and you can selectively share those parts
    • Make sure those parts are not sufficient to reconstruct the original (unsafe) information
  • Select a safe subset to develop
    • When developing some information, it may be that some parts are good or safe to develop, and you can selectively develop those parts
  • Selectively share with a subset of people
    • Some information is only risky if certain types of groups get it; if you can aim where the information goes you can avoid the risk
    • Report the information to the proper authorities
  • Don’t develop
    • Some information is too risky to develop
  • Don’t share
    • Some information is too risky to share
  • Monitor to see if others move towards developing or sharing it
    • If you’ve identified something risky it may make sense to see if others are developing it or likely to share it so that you can warn them, focus on building a cure, contact authorities, or start changing your actions knowing that a disaster is likely. 
  • Try to decrease the likelihood of rediscovery
    • If it’s really risky you may want to see what you can do about decreasing the likelihood that it is rediscovered
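As a rough sketch of how these heuristics might be strung together (the keys, ordering, and priorities below are our own illustrative structuring, not the authors’ policy):

    def triage(meme):
        """Suggest responses for an idea that is both novel and high-impact."""
        if not meme["risky"]:
            return ["develop", "share", "log your analysis and proceed"]
        if meme["risk_unclear"]:
            return ["think more about it"]
        actions = []
        if meme["cure_feasible"]:
            actions.append("develop a cure, perhaps with a trusted, capable group")
        if meme["safe_subset_exists"]:
            actions += ["select a safe subset to develop",
                        "select a safe subset to share"]
        if meme["safe_audience_exists"]:
            actions.append("selectively share with a safe subset of people")
        if not actions:
            actions = ["don't develop", "don't share"]
        # Whatever else is chosen, keep watching for independent rediscovery.
        actions.append("monitor others and try to decrease rediscovery likelihood")
        return actions

    print(triage({"risky": True, "risk_unclear": False, "cure_feasible": False,
                  "safe_subset_exists": True, "safe_audience_exists": False}))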

Conclusion

In this post we discussed why you should treat heuristics for dealing with meme hazards as an important part of achieving your goals rather than as a chore or hassle. We also discussed how work that may seem unambiguously good may turn out to have negative net effects. In particular, we mentioned the “ratio argument” and brought up some evolutionary considerations (memes may mutate in unhelpful ways in order to gain a reproductive edge). We then considered when one should be especially cautious about meme hazards: when the information is both highly novel and capable of producing large effects. And finally, we provided a list of heuristics to consider when faced with novel information capable of producing large effects.

In the future we hope to weave these heuristics into a more complete meme hazard policy for researchers and decision makers working at the cutting edge.


*After posting this article someone contacted us to point out that they in fact love compliance training. This person was very insistent that we update the post to reflect that fact.

Comments

  1. gordianus · April 1, 2021

    The usefulness of a meme increases the chances that it will be passed on.

    In some cases the opposite is true. If you’re trying to solve some problem with the help of a meme, then a meme that quickly and simply solves the problem will require you to think about it for a shorter time, and so may be less likely to be shared than a meme that isn’t solving the problem but seems to you like it is, so that you keep focusing on it in hopes of finally solving the problem with it. This is particularly true in politics/activism and other areas where plausibly useful strategies include ‘spreading awareness’ and other excuses for transmitting memes, so that thinking the problem is still a serious problem makes you more likely to think spreading the meme is important. See Lou Keep’s review of Hoffer’s The True Believer for a more detailed discussion of how this happens in politics.
