What if scaling a system of digital governance through algorithmic tools does not merely extend it to more cases but alters it? How does automating the enforcement of the same set of rules, procedures, and requirements transform the form of power exerted by a platform?

In newly published research, I asked these questions of automated moderation on Reddit and found that the scale enabled by automation both transforms the original distribution of power in Reddit’s system of governance and obscures those changes.

Content moderators across social media platforms rely on algorithmic tools to automate many of their tasks. On Reddit, the most popular tool is a bot called AutoModerator, which many subreddits use to filter posts and comments, auto-assign flair, and remove content. But the bot does more than make removals more efficient or more accurate. It also transforms the system of governance it sits within.

For example, enforcing a subreddit rule through automation and at scale allows new manipulations of visibility — of content and of moderation itself — that weren’t originally built into the platform’s governance mechanisms. By scaling individual removals of rule-violating posts into the instant, silent removal of every post from a specific account, AutoModerator can recreate the effects of a ‘shadowban’ in practice. Without changing the subreddit’s rules, the bot undermines the platform’s basic norms about the visibility of moderation.
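
To make that dynamic concrete, here is a minimal, hypothetical sketch in Python. It is not AutoModerator’s actual code or its configuration format; the Post class and both functions are illustrative stand-ins for the difference between an ordinary, visible removal and a silent, account-wide one.

```python
# Hypothetical illustration only -- not AutoModerator's real code or config.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    visible: bool = True
    author_notified: bool = False

def remove_for_rule_violation(post):
    """Ordinary enforcement: the post comes down and the author is told why."""
    post.visible = False
    post.author_notified = True

def silently_remove_all_from(author, queue):
    """Scaled enforcement: every post from one account is removed with no
    notice, so the author still believes their posts are live -- the
    practical effect of a shadowban."""
    for post in queue:
        if post.author == author:
            post.visible = False  # hidden from everyone else
            # no notification is sent; to the author, nothing has changed

queue = [Post("alice"), Post("bob"), Post("alice")]
silently_remove_all_from("alice", queue)
print([(p.author, p.visible, p.author_notified) for p in queue])
# -> [('alice', False, False), ('bob', True, False), ('alice', False, False)]
```

The point is not the code but the asymmetry it encodes: the same removal action, applied to an entire account and without notice, becomes a different kind of power than the rule it was written to enforce.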

There’s something even more complex at play, though, in the deployment of bots to moderate content: the ill-defined boundaries of the categories of speech they are meant to remove make tracking their transformative effects especially difficult.

AutoModerator, for example, is often presented as an anti-spam tool. Spam has what Finn Brunton calls a “productive blurriness,” meaning that the unclear boundaries of what counts as spam can actually be a constructive foundation for online communities to work out what they stand for. But the power AutoModerator has to reconfigure the rules of governance is concealed within that blurriness — the bot is presented as merely an anti-spam tool rather than a force that shapes many of the basic premises of governance on the platform.

These findings should help to shift our understanding of automated moderation in practice, especially as policymakers and platform moderators alike seek to use automation to make the internet safer.

Laws that incentivize more automation in content moderation won’t simply make enforcing existing rules more efficient. Instead, they will transform a platform’s basic system of governance. If we care about scalable moderation, we should also care about how those changes will shift power, even if the algorithms were perfectly accurate and free of bias.

It’s clear that AutoModerator is an essential tool for managing almost any subreddit. The moderators I spoke with in my research already know from firsthand experience that automated moderation can shift power, and they expressed concerns over how those new forms of power are used. But AutoModerator, for all its flaws, is the best tool they have available. This research invites us to think hard about the power dynamics of automated moderation tools — not only the accuracy of their decision-making — when designing and deploying them.