The debate over how to moderate encrypted groups is the latest chapter in a decades-long argument about regulating encrypted spaces. Earlier rounds of that argument were dominated by use cases such as CSAM, terrorism, and police investigations.

These conversations have been held almost exclusively among lawyers, engineers, and law enforcement, whose values and concerns have defined the contours of the debate. But online communities within encrypted spaces have shifted. Regulation is now also the concern of group administrators, volunteer moderators, community organizers, and trust and safety teams at technology companies. Rather than monitoring content only for illegal activity, these people also watch encrypted chatrooms and messaging applications for content that violates community standards or platform terms of service, such as harassment, hate speech, and threats.

The project:

The primary goal of this project is to bring historically excluded groups into the discussion of content moderation in encrypted spaces. We invite newcomers to share their stories with the technologists, scientists, and policymakers working on this moderation challenge. In doing so, we aim to transform the boundaries of the debate and redefine who counts as a relevant constituency for regulating content in encrypted spaces.

The National Science Foundation funds this project.