Last week, CAT Lab submitted recommendations to the US Office of Management and Budget on how to advance governance, innovation, and risk management for AI. As one of the few participatory, community science labs doing work in this space, we believe it’s especially important for the government to support and acknowledge the role that everyday Americans already play in AI governance— and the role they should play in the future of this important technology. Here’s what we sent the OMB:


The Citizens and Technology Lab (CAT Lab) of Cornell University is pleased to provide input in response to the request for comment (RFC) on OMB’s draft memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. These comments respond to questions 2, 3, 5, and 8 of the RFC, and provide additional feedback. In this document we:

  • Encourage the OMB to acknowledge and support the role of community/participatory science in the oversight of AI tools. 
  • Urge the OMB to develop guidelines for soliciting and supporting participation in AI innovation and governance from underrepresented groups who have been historically excluded from AI research and development – and who face disproportionate risks from AI systems.
  • Advise the OMB to account for the labor, rights, and mental health consequences for volunteer and professional communities whose work is used to train AI systems and manage AI safety.

About the Citizens and Technology Lab

The Citizens and Technology Lab (CAT Lab) at Cornell University works alongside the public to discover effective community ideas for change in ways that advance scientific knowledge. CAT Lab has worked with communities of tens of millions of people on reddit, Wikipedia, and Twitter to study socio-technical safety and reliability questions, including online safety and discrimination. 

CAT Lab is led by Dr. J. Nathan Matias, Assistant Professor at Cornell in the departments of Communication and Information Science. In 2010, he developed one of the first socio-technical testing systems for a generative language model while working at one of the earliest neural network predictive text software companies (SwiftKey).

CAT Lab’s position with respect to testing, forecasting, and preventing harms from collective AI and human behavior

CAT Lab has particular expertise with AI systems that are built on adaptive algorithms: systems that train on their outputs, such as systems for ranking and sorting, resource allocation, logistics management, and generative AI. As Dr. Matias has outlined in a recent article in Nature,1 the creators of these systems cannot currently make reliable assurances about the performance and safety of systems that adapt to their human environments. He has outlined the needed scientific work of testing, forecasting, and preventing harms from this kind of collective AI and human behavior, including the essential role of community/citizen science. When AI systems adapt to widely different contexts, attempts at AI transparency, accountability, and regulation will grossly fail the most vulnerable in society without significant support for community/participatory science on AI and its social impacts.

Coordination mechanisms and OMB’s role in advancing responsible AI innovation (questions 2, 3, 8)

OMB should direct agency heads to establish a task force to build community/participatory science-based review of AI tools. 

Citizen/community science offers an important mechanism to overcome severe safety and inclusion risks in science and industry. When Dr. Safiya Noble wrote “Algorithms of Oppression,” documenting the gross misrepresentation of Black girls with images of pornography on Google, she was drawing attention to a systemic problem in AI systems that went largely undetected by AI creators in academia and industry for over a decade. As one of the only Black women in information science, she was able to observe this problem partly because it affected her and her communities. Over a decade later, US Information Science as a field has not increased the rate of PhDs awarded to Black women. Given the severe under-representation of minorities in computing, work on AI safety and reliability cannot wait for academia to change in order for communities to be safe.2 Citizen/community science can fill these knowledge gaps and transform the diversity of the scientific and industry workforce by broadening who participates in research and by creating pathways for opportunity.


The EPA’s program in Participatory Science for Environmental Protection is one such model, as documented in the 2022 report “Using Participatory Science at EPA: Vision and Principles.” In establishing this task force, OMB should seek agency input on funding needs for universities, civil society, state/regional/local governments, and Tribal communities to pursue scientific inquiry guided by community designs. It is critical to begin now to build the scientific and engineering practice needed to understand collective patterns of human-algorithm behavior.

OMB should include indicators in any system it develops to inventory AI use cases (pursuant to section 7225 of the Advancing American AI Act) to seek clarity from Federal agencies on the diversity of analysis in safety-impacting and rights-impacting AI.

As Dr. Matias noted in a response to the NTIA RFC (Docket No. 230407-0093),3 the contextual dependence of AI systems perpetuates harms over long periods of time, particularly for people underrepresented in computing. Advancing responsible AI innovation will require a coordination mechanism that reflects this context.

To engage the public in responsible AI innovation, federal agencies should consider the roles of access to data, ethics, and volunteer labor that make up the governance systems of digital decision-making.

CAT Lab notes that federal policies need to work in tandem with algorithm transparency to enable communities to produce high-quality research in the public interest, and that researchers need protection from private-sector retaliation for research and documentation of harms. We have documented some of the opportunities to enable independent research in a recent Tech Policy Press article, Enabling Independent Research Without Unleashing Ethics Disasters.4

CAT Lab recommends that the OMB reflect on how federal agencies can approach AI governance with the role of semi-professional moderators in mind, and designate a person to lead in advancing our understanding of the role of volunteer moderators in responsible AI innovation.

Though trust and safety professionals are part of the bulwark of protection against AI harms, much of the day-to-day activity of protecting online spaces and mitigating algorithmic harm is done by volunteer moderators of online communities, whose conversations have become so-called “ground truth” for the training of AI systems. Community moderators who safeguard platforms, including Wikipedia and reddit, have already been overwhelmed by generative AI bots, which have escalated the amount of spam they must handle to maintain online community spaces.5 Scholars have begun to document the disruptions from AI-generated content in these communities; the downstream effects on generative AI systems from this potential information pollution are still poorly understood.6

Similarly, the software systems that help enable volunteer moderation are often built and maintained by volunteer open-source engineers. We encourage the OMB to include the governance of volunteer and open-source sociotechnical systems as part of its safety-impacting and rights-impacting AI framework. OMB should also incentivize basic research and the development of open-source software and standards focused on the mitigation of algorithmic harm.

OMB should direct agency leads to support the work of developing standards and best practices for mitigating the mental health harms and secondary trauma experienced by trust and safety teams and moderators working with AI systems.

When policymakers discuss the safety and rights implications of AI, they often consider end-users of AI, but AI development also involves managing the safety and rights of the workers who train AI systems and monitor their safety. This work, as documented by Siddharth Suri and Mary Gray in “Ghost Work”, Alexa Koenig and Andrea Lampros in “Graphic: Trauma and Meaning in Our Online Lives”, and by Sarah T. Roberts in “Behind the Screen,” can expose workers to severe secondary trauma and mental health risks that psychologists have still not discovered how to reliably mitigate. There is much more research to be supported in this developing field of AI harm mitigation – for example, emerging research on AI-rendered style-transfer filters that mitigate the impact of disturbing imagery.7 At Cornell, we have a team currently compiling a systematic review of these secondary trauma issues, how to measure the psychological harms from training AI systems to be safe, and how to introduce workplace protections that safeguard the health of the people behind the screen of AI.

Use cases for presumed safety-impacting and rights-impacting AI (Section 5(b)) that should be included, removed, or revised (question 5)

Harms and benefits from human-algorithm interaction in AI systems should be expected to develop and change over time. OMB should consider expanding the list of safety-impacting and rights-impacting AI to specify arenas where effects may not yet be easily observed, but where research has shown initial evidence of harm, specifically:

  1. The chilling effect on freedom of speech that emerges from the use of AI to establish automated law enforcement to deter behavior. This includes systems of legal action in which a computer is responsible for unsupervised decision-making, which distributes or outsources surveillance and enforcement to parties that are not governments. 
  2. The potential for such automated law enforcement tools to be deployed in requirements to watermark generative AI content. Early research has shown that receiving takedown notices from copyright enforcement bots drives down individuals’ rate of posting, on average, regardless of whether the takedown notice was legitimate or not.8 

* * * * *

Thank you for the opportunity to provide comments. The Citizens and Technology Lab looks forward to serving as a potential resource for policy discussions on this issue. 

Elizabeth Eagen, Deputy Director, Citizens and Technology Lab, Cornell University

Dr. J. Nathan Matias, Assistant Professor, Cornell University Department of Communication, Founder & Executive Director of the Citizens and Technology Lab

Dr. Sarah Gilbert, Research Director

Eric Pennington, Lead Data Architect

Dr. Edward L. Platt, Scientific Consultant

Alexandra Gonzalez, PhD Student, Cornell University

Lucas Wright, PhD Candidate, Cornell University

  1. Matias, J.N. (2023). Humans and algorithms work together — so study them together. Nature 617, 248–251. https://doi.org/10.1038/d41586-023-01521-z ↩︎
  2. Matias, J.N., Lewis, N.A., & Hope, E.C. (2022). US universities are not succeeding in diversifying faculty. Nature Human Behaviour 6, 1606–1608. https://doi.org/10.1038/s41562-022-01495-4 ↩︎
  3. Matias, J.N. (2023). AI Policy Will Fail Society Without Community Science. https://citizensandtech.org/2023/06/ntia-submission-06-2023/ ↩︎
  4. Lukito, J., Matias, J.N., & Gilbert, S. (2023, May 10). Enabling Independent Research Without Unleashing Ethics Disasters. Tech Policy Press. https://www.techpolicy.press/enabling-independent-research-without-unleashing-ethics-disasters/ ↩︎
  5. Clarke, L. (2023, April 11). Reddit Moderators Brace for a ChatGPT Spam Apocalypse. Vice. https://www.vice.com/en/article/jg5qy8/reddit-moderators-brace-for-a-chatgpt-spam-apocalypse ↩︎
  6. Lloyd, T., Reagle, J., & Naaman, M. (2023). “There Has To Be a Lot That We’re Missing”: Moderating AI-Generated Content on Reddit. arXiv. https://doi.org/10.48550/arXiv.2311.12702 ↩︎
  7. Sarridis, I., Spangenberg, J., Papadopoulou, O., & Papadopoulos, S. (2023). Mitigating Viewer Impact from Disturbing Imagery using AI Filters: A User-Study. arXiv. https://doi.org/10.48550/arXiv.2307.10334 ↩︎
  8. Matias, J.N., Mou, M., Penney, J., Klein, M., & Wright, L. (2020). Do Automated Legal Threats Reduce Freedom of Expression Online? Preliminary Results from a Natural Experiment. https://osf.io/nc7e2/ ↩︎