How can we prevent online harassment and other unruly behavior? More generally, what can the public do for a safer, more understanding internet, independently of tech companies?

Today, I’m happy to announce two new studies that help answer these questions. The first, published in the Proceedings of the National Academy of Sciences, is a large-scale citizen science study I conducted in 2016 with reddit’s science discussion community, which had 13.5 million subscribers at the time. The second is an in-depth ethnography with volunteer moderators on reddit, to understand the forces that shape their role as organizers and policymakers for millions of people online.

Preventing Harassment and Other Unruly Behavior in Massive Science Discussions

How do a community’s rules about behavior influence people’s decisions to participate in an online conversation, and how they behave once they join? These questions matter deeply to online communities, and they also bear on fundamental questions about social norms and human behavior.

Testing Ideas Of Behavior Change Online

Like moderators of many communities, the moderators of r/science enforce rules about the kinds of comments they allow. Practically, people often wonder whether having clear rules against harassment could cause rather than prevent problems.

Some people don’t believe that people who harass others online can change their behavior. Others worry that strict rules might have a chilling effect on participation. Yet interventions that shape social norms sometimes do influence behavior. And while some policies do have chilling effects, a small number of studies have found that people say they’re more likely to express themselves online in the presence of clear policies against harassment and evidence of moderation. Does that translate into behavior change? We wanted to find out.

We also asked scientific questions about how groups form, how social norms develop, and the situations where norm information is influential. Social norms, as described by psychologists, are our beliefs about what other people consider acceptable. When forming those beliefs, we pay more attention to some people or groups, what psychologists call “referents.” Scientists expect that we are more influenced by information (like rule postings) coming from groups we feel closer to. What might we expect on reddit, where people flow from community to community at the whims of popularity algorithms?

In the r/science community, which had 13.5m subscribers in September 2016, a team of ~1200 volunteers reviewed tens of thousands of comments every week. In this large community, even small changes in the rate of unruly behavior would be meaningful. Across 29 days, we tested the effect of rule-postings on the behavior of first-time commenters. We posted sticky comments to the top of some discussion threads and not to others. Moderators adjusted their community’s code to keep themselves unaware of which discussions received the rules. To measure the outcomes, the community invited the CivilServant software to record (a) comments by newcomers and (b) whether a comment was removed by moderators for violating the rules.
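The assignment procedure described above, where some discussion threads receive the rules as a sticky comment and others do not, amounts to a simple randomized trial. Here is a minimal sketch of how such an assignment might be implemented; this is an illustrative reconstruction, not the actual CivilServant code, and the function and thread names are hypothetical.

```python
import random

def assign_condition(thread_ids, seed=42):
    """Randomly assign each new discussion thread to receive the
    rules sticky comment ("sticky") or no sticky ("control").

    Hypothetical sketch: the real CivilServant software also posts
    the sticky via reddit's API and logs each assignment and the
    moderation outcomes for later analysis.
    """
    rng = random.Random(seed)  # fixed seed makes the example reproducible
    assignments = {}
    for thread_id in thread_ids:
        # Coin-flip randomization keeps moderators (and researchers)
        # unaware of which discussions receive the rules posting.
        assignments[thread_id] = rng.choice(["sticky", "control"])
    return assignments

threads = ["t3_abc", "t3_def", "t3_ghi", "t3_jkl"]
print(assign_condition(threads))
```

Because assignment is random, differences in newcomer behavior between the two groups of threads can be attributed to the rule posting rather than to the topic or timing of the discussion.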

The experiment, which included 2190 posts about journal articles, involved 62,457 comments overall and 18,264 newcomer comments. In the previous month, roughly 52% of all newcomer comments were removed by moderators. The median comment count for allowed posts was 2, and the mean was 36. Together, we found that posting the rules:

  • increased the number of newcomer comments by 70% on average
  • increased the chance that a newcomer comment would follow the rules by 8 percentage points on average
  • since more newcomers added comments, the total number of removed newcomer comments may also have increased
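To make these figures concrete, here is a back-of-envelope calculation (my own sketch, using only the numbers reported above) showing how an 8 percentage-point gain in rule compliance can combine with a 70% increase in newcomer comments so that the total number of removals still rises:

```python
# Back-of-envelope arithmetic using the reported figures.
baseline_removal_rate = 0.52                     # ~52% of newcomer comments removed previously
baseline_compliance = 1 - baseline_removal_rate  # ~48% followed the rules

compliance_gain_pp = 0.08                        # +8 percentage points with rules posted
treated_compliance = baseline_compliance + compliance_gain_pp  # ~56%

newcomer_increase = 0.70                         # 70% more newcomer comments on average

# Per 100 baseline newcomer comments:
baseline_removed = 100 * baseline_removal_rate                 # 52 removed
treated_comments = 100 * (1 + newcomer_increase)               # 170 comments
treated_removed = treated_comments * (1 - treated_compliance)  # ~75 removed

print(baseline_removed, treated_removed)
```

In other words, the removal *rate* falls, but because far more newcomers comment, moderators can end up removing more comments in absolute terms, which is exactly the pattern the third bullet describes.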

More about the Study

If you’re interested in the full analysis, you can read the analysis plan and the Supplementary Information, which reports the pre-registered findings (very similar to the final results) and describes additional work to improve the quality of the final analysis. The study was reviewed by community moderators and by the MIT ethics board. I also debriefed the community about the study after it finished.

Today’s paper has two important limitations. First, since our software could only observe accounts that made comments, it’s not clear whether the difference in unruly behavior is caused by changes in who chose to comment or by changes in individual behavior. Second, this study looks at all removed comments rather than focusing on any one kind of unruly behavior.

I am grateful to the moderators of r/science for developing this study with me, especially Nathan Allen and Piper Below, who managed the relationship, and William Budnick, who offered feedback on the modeling approach. Merry Mou co-developed the research software that made it possible. I am also grateful to my dissertation committee of Ethan Zuckerman, Tarleton Gillespie, and Elizabeth Levy Paluck, for thoughtful and ongoing feedback on this paper.

Will This Work Anywhere Else?

While we have strong evidence on the effect of rule-postings in r/science, different communities might experience different results. I created CivilServant to make it easy for communities to run their own experiments rather than take our word for it. If your reddit community is interested in replicating this study, read more about how to get involved here:

What Does it Mean for Volunteers to Govern our Online Lives?

Journalists often see governments and companies as the primary parties responsible for governing online behavior. Throughout the history of the internet, volunteers have actually carried the greater burden of managing social relations online. How do people take on those roles, and what forces shape the meaning of their work?

Life of a mod, by /u/solidwhetstone (used with permission)

In a new ethnography with reddit moderators, which examines how they gain their positions and the pressures they face, I try to harmonize three competing ideas about volunteer moderation. I show how people use all three kinds of rhetoric when trying to carve out the meaning of moderation:

  • Unpaid labor (often in complaints to companies)
  • Oligarchy (often in community complaints about moderators)
  • Civic participation (when moderators try to justify the effort/hassle to themselves)

Ultimately, I argue that it’s a mistake to see moderators purely as exploited workers, domineering oligarchs, or public-minded citizens. The meaning of volunteer moderation is constantly changing, since it functions to fill in the cracks that are left by platforms’ attempts to organize social relations at a massive scale. And like other influential roles in society, the meaning of moderation is constantly contested and redrawn from situation to situation.

About the Illustrations

This blog post includes anonymized illustrations of discussions in r/science. I am grateful to Sophie Diehl for conversations about illustrating this research and for open source algorithms by Anders Hoff that I modified to create the artwork.