How can we think about the full human impact of generative AI? And how should we decide who gets included or excluded in conversations about AI governance and safety? 

Conversations about the harms of AI and how to govern them often focus on tech makers, the users of AI, and the people affected by AI decisions. Those conversations often overlook the countless hours, the mental health costs, and the billions of dollars spent on the invisible labor of safety work.


Here at the Citizens and Technology Lab, we have been working with community moderators and content moderators since the first community collaborations in 2015 that motivated us to create the lab. We work alongside the public to create industry-independent evidence for flourishing digital worlds. And in the last six months, we’ve watched as the voices of moderators have been ignored even as their work has become further monetized in pursuit of ever-larger, more powerful, more lucrative AI systems.

Tech companies spend nearly $9 billion a year on content moderation, and economic estimates suggest that volunteers provide a comparable amount of value. Call centers for click-workers are part of economic development plans in many U.S. regions, but they also bring significant burdens to those same communities.

Artificial Intelligence trust and safety fundamentally depends on commercial and volunteer content moderation at all levels of system training and operation: 

  • Training: AI makers rely on the volunteer labor of moderators on platforms like Reddit and Wikipedia when they scrape platform data that they expect will already be relatively free of hateful, violent, and discriminatory language. 
  • Safety testing: AI producers also rely on commercial content moderation when training and evaluating systems, paying workers to review the most violent and disturbing material generated by AI systems.
  • Information pollution: When AI systems are used to create new content that people post online, they create further moderation labor for the very communities whose work these systems were trained on.
  • Information quality: When AI experts discuss the problem of feedback loops from AI systems trained on the outputs of other AI systems, many proposed solutions rely on further unpaid and paid labor from the same moderators.

Ultimately, AI systems are creating increased demand for content moderation work that exposes people to severe mental health risks for little to no compensation. Consequently, any attempt to understand and govern AI needs to account for these labor and mental health issues.


Image source: wyvern eating its tail, from a 17th-century alchemical tract