How can democratic societies host hard conversations about their conflicts in ways that keep dialogue going rather than shutting it down? If you’ve been online at all this year, you’ve probably witnessed a heated conversation marked by deep disagreement, whether about war, relationships, injustice, or even just the question of who gets to speak.

While some disagreement can be productive, it’s not uncommon to see conversations slip into rule-violating behavior, such as flaming or personal attacks. On Reddit, volunteer moderators are tasked with reducing this kind of behavior. However, they have limited tools at their disposal, relying mostly on punitive measures such as removing content and banning users. While these tools can stop violations in their tracks, they may not help users learn how to participate productively in a community in the future.

  • How can we improve users’ experience of bans and suspensions?
  • What can online communities do to improve diverse dialogue on contentious topics?

Subreddits, especially large ones, often see a lot of egregious behavior and bad-faith contributions, particularly when the topics at hand are contentious. It’s also not uncommon for moderators to see out-of-bounds contributions from people who were caught up in the moment or didn’t understand community norms. Mods deal with bad-faith contributions by issuing bans that permanently prevent people from participating in the future. For lesser violations, communities typically use temporary suspensions to signal how serious the non-normative behavior was and to give people time to cool off. (As a note, we use the terms “permanent ban” and “temporary suspension” for clarity; Reddit’s system does not make this distinction. Users on the receiving end of these infractions get a notice that they have been “permanently banned from participating” in the community or “banned from participating in [community] for [x] days.”)

Open questions remain about how effective temporary suspensions are at mitigating future violations and at helping people return as meaningful contributors to the community. We want to help communities answer these questions through field experiments.

We’re recruiting communities that might want to explore this together. As the first step toward designing new research together, we would do a bit of preliminary analysis and provide your team with:

1. A summary of posts, comments, comment removals, and unique active accounts. If a particular event might have impacted behavior in your community, we could also compare activity before and after it. The table would look something like this:

|              | Posts | Comments | Comment removals | Unique active accounts |
|--------------|-------|----------|------------------|------------------------|
| Before event |       |          |                  |                        |
| After event  |       |          |                  |                        |

2. Since we are interested in the experience of commenters who receive bans and suspensions, we would look at those too. The table would look something like the one below. As with the first table, we could also provide comparisons before and after a significant event.

|              | # Experienced a comment removal | % Experienced a comment removal | # Experienced a post removal | % Experienced a post removal | # Experienced suspension or ban | % Experienced suspension or ban |
|--------------|---------------------------------|---------------------------------|------------------------------|------------------------------|---------------------------------|---------------------------------|
| Before event |                                 |                                 |                              |                              |                                 |                                 |
| After event  |                                 |                                 |                              |                              |                                 |                                 |

3. In an ideal world, people who do get suspended are able to return and participate substantively in a respectful and productive way. To inform future research that tests interventions to support this kind of successful community participation, we would analyze how behavior differs for accounts that receive temporary suspensions or permanent bans, and what happens after their accounts are restored. To do this, we would create a dataset of participation in the six months before and after an account received its first temporary suspension. We would then create two comparison groups: (a) accounts whose first ban was permanent and (b) a random sample of accounts that were active on the same days as those that received temporary suspensions. That analysis would provide you with the details in this table:

[Table: 13 columns × 5 rows, split into two sections covering the 6 months before and the 6 months after a ban or suspension, for three groups: accounts with temporary suspensions, a random sample of accounts with no suspensions, and accounts with permanent bans. Each section reports median comments, median % removed, median posts, and median posts removed, along with totals. For the second group, the table also reports what percent remain active and what percent are suspended again.]
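As a rough sketch of how the before-and-after summary in the first table could be computed, here is a minimal pandas example. This assumes a mod-log or archive export already loaded into a DataFrame; the column names (`author`, `created_utc`, `kind`, `removed`) are placeholders for illustration, not actual Pushshift or Reddit API field names.

```python
import pandas as pd

def before_after_summary(df: pd.DataFrame, event: pd.Timestamp) -> pd.DataFrame:
    """Count posts, comments, comment removals, and unique active accounts
    on each side of `event`.

    Expects columns: author (str), created_utc (datetime64),
    kind ('post' or 'comment'), removed (bool).
    """
    labeled = df.assign(
        period=df["created_utc"].lt(event).map({True: "before", False: "after"}),
        is_post=df["kind"].eq("post"),
        is_comment=df["kind"].eq("comment"),
        removed_comment=df["kind"].eq("comment") & df["removed"],
    )
    summary = labeled.groupby("period").agg(
        posts=("is_post", "sum"),
        comments=("is_comment", "sum"),
        comment_removals=("removed_comment", "sum"),
        unique_active_accounts=("author", "nunique"),
    )
    # Keep "before" on top regardless of alphabetical group order.
    return summary.reindex(["before", "after"])

# Toy data standing in for a real mod-log export.
records = pd.DataFrame({
    "author": ["a", "a", "b", "c"],
    "created_utc": pd.to_datetime(
        ["2023-01-01", "2023-01-02", "2023-03-01", "2023-03-02"]
    ),
    "kind": ["comment", "post", "comment", "comment"],
    "removed": [True, False, False, True],
})
summary = before_after_summary(records, pd.Timestamp("2023-02-01"))
print(summary)
```

The same grouping approach extends to the suspension analyses: instead of splitting on a single community event, each suspended account's timeline would be split on the date of its first suspension, with medians taken across accounts in each group.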

Directions for future research

Based on these initial analyses, we would outline a field experiment to test whether temporary-suspension messages help people follow the rules after receiving this type of warning. For example, we might look at the following norm interventions:

  • Messages that explain what a temporary suspension means
  • Messages with guides on how to contribute productively to the community
  • Messages from different standpoints that describe the value of a functional shared conversation
  • Messages that help people productively manage their moral outrage, which is one possible driving force behind heated conversations that get out of hand
  • Your ideas here!

Interested in participating?

If you are a moderator of a community that is interested in fielding experiments to test ideas to help people learn to contribute meaningfully to a community after a temporary suspension, please reach out to Sarah Gilbert either through Reddit (/u/SarahAGilbert) or email (sarah.gilbert@cornell.edu).

When you write to us, please confirm the following details. If you’re unsure, we can talk through them with you:

  • Does your community issue at least dozens of temporary suspensions a month?
  • Do you have at least a year of archived mod logs within the period covered by the Pushshift dataset (Reddit’s founding up to April 2023)?
  • If your mod log data is more recent (i.e., after the period covered by the Pushshift dataset), do you have an alternative source of data that we could work with?

Image source: Social Network Visualization, Wikimedia Commons