Tech companies often say they’re working to improve the well-being of content moderators. But how can the industry make those improvements if scientists don’t yet know how to reliably measure the psychological impacts of this very challenging work?

Last November, Niamh McIntyre at the Bureau of Investigative Journalism published a report based on interviews with over 40 content moderators for dating apps like Grindr, Bumble, and Tinder. McIntyre tells the story of Gael, a former freelance content moderator for Bumble. Working from home, Gael would review up to 600 cases of suspected abuse over five to six hours each day. Like the work of first responders in other fields, the job can come with severe mental health impacts. A year after quitting, Gael still struggled to discuss cases of child sexual abuse: “Speaking about them brings back the images; I couldn’t sleep well for a few weeks afterward” (McIntyre, 2023).

Many content moderators (and now AI workers) experience mental health challenges similar to Gael’s. Multiple survey studies have found that moderators who are regularly exposed to traumatic online content face psychological challenges that affect their well-being (Greinacher et al., 2019; Lee et al., 2018; Cook et al., 2022; Schöpke-Gonzalez et al., 2022). This survey research, alongside many qualitative studies of content moderators, has shown that this important first-response work also creates harms for the people who do it (Roberts, 2019; Jereza, 2021).

There’s a big difference between knowing that something is wrong and knowing enough to work on solutions. With reliable ways to measure the mental health impacts, researchers could evaluate different ideas to make content moderation more livable. So far, though, researchers have mostly relied on measures developed for fields that only partly resemble content moderation: scales for burnout, PTSD, and secondary traumatic stress created for healthcare, social work, and emergency response (Spence et al., 2023). The scale and intensity of content moderation work are greater than in those fields. While ambulance staff do witness traumatic imagery, they don’t have to review hundreds of cases a day, day after day, as Gael did.

Content moderators like Gael might also be under-supported due to biases in the available science. Common measures like the Maslach Burnout Inventory (MBI) have been critiqued for taking an “Americanized” perspective, potentially leading to errors in understanding the effects of traumatic work outside the U.S. (Demerouti et al., 2001). Many moderators like Gael live and work in the Global South, where standardized assessment tools might fail to detect important and damaging psychological impacts (Roberts, 2019; Jereza, 2021).

In this project, we are working to imagine what it would take to create reliable measures of the psychological impacts of content moderation work, measures that could help test ways to improve the well-being of moderators. As a first step, we’re conducting a systematic review of the measures already in use, both within content moderation and beyond. We hope this review can serve as a starting point for creating survey measures that are better suited to content moderation and that can be adapted for different cultures.

Will content moderation ever be a job with no psychological costs? It’s hard to imagine a version of this work that isn’t traumatizing at all. But there are many practical things organizations can try that could make this important work safer and more sustainable for the people who do it. The search for what works will depend on better measurement of those psychological costs. In time, we hope that people like Gael will get the support and care they need, even as they provide this important care for others.

References:

Cook, C. L., Cai, J., & Wohn, D. Y. (2022). Awe versus aww: The effectiveness of two kinds of positive emotional stimulation on stress reduction for online content moderators. Proceedings of the ACM on Human-Computer Interaction, 1-19. https://doi.org/10.1145/3555168

Demerouti, E., Bakker, A. B., Nachreiner, F., & Schaufeli, W. B. (2001). The job demands-resources model of burnout. Journal of Applied Psychology, 86(3), 499-512. https://doi.org/10.1037/0021-9010.86.3.499

Greinacher, A., Derezza-Greeven, C., Herzog, W., & Nikendei, C. (2019). Secondary traumatization in first responders: A systematic review. European Journal of Psychotraumatology, 10(1), Article 1562840. https://doi.org/10.1080/20008198.2018.1562840

Jereza, R. (2021). Corporeal moderation: Digital labour as affective good. Social Anthropology, 29(4), 928-943. https://doi.org/10.1111/1469-8676.13106

Lee, J. J., Gottfried, R., & Bride, B. E. (2018). Exposure to client trauma, secondary traumatic stress, and the health of clinical social workers: A mediation analysis. Clinical Social Work Journal, 46(3), 228-235. https://doi.org/10.1007/s10615-017-0638-1

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press. https://doi.org/10.2307/j.ctvhrcz0v

Schöpke-Gonzalez, A. M., Atreja, S., Shin, H. N., Ahmed, N., & Hemphill, L. (2022). Why do volunteer content moderators quit? Burnout, conflict and harmful behaviors. New Media & Society. https://doi.org/10.1177/14614448221138529

Spence, R., Bifulco, A., Bradbury, P., Martellozzo, E., & DeMarco, J. (2023). Content moderator mental health, secondary trauma, and well-being: A cross-sectional study. Cyberpsychology, Behavior, and Social Networking. Advance online publication. https://doi.org/10.1089/cyber.2023.0298

McIntyre, N. (2023, November 20). Behind every swipe: The workers toiling to keep dating apps safe. The Bureau of Investigative Journalism.