How can governments ensure that policies for AI transparency and accountability actually serve the common good: preventing and remediating the harms they are designed to govern?

Last week, CAT Lab made a submission to the U.S. National Telecommunications and Information Administration (NTIA) on this issue in response to their “AI Accountability Policy Request for Comment.” Here it is:


Dear National Telecommunications and Information Administration,

I am writing in response to your “AI Accountability Policy Request for Comment,” Docket No. 230407-0093.

I am an Assistant Professor at Cornell in the departments of Communication and Information Science, a fellow at the Center for Advanced Studies in the Behavioral Sciences at Stanford, and an Associate Research Scholar at the Knight First Amendment Institute at Columbia University. In 2010, I developed one of the first socio-technical testing systems for a generative language model during my time at SwiftKey, one of the earliest neural network predictive text software companies.

Last month, I published an article in Nature on the scientific challenge of testing, forecasting, and preventing harms from collective AI and human behavior. I encourage you to read the short paper, which outlines the scientific challenge and the case for community/participatory science in greater detail (Matias 2023). In this comment, I will focus on one centrally important issue which may go under-remarked by others.

My central point is this: due to the nature of AI systems, if the history of science is any guide, attempts at AI transparency, accountability, and regulation will grossly fail the most vulnerable in society without significant support for community/participatory science on AI and its social impacts.

The history of environmental regulation provides us with an instructive disaster to learn from. In 1963, the Clean Air Act set in motion an air quality monitoring system that has contributed significantly to public health. Yet while lower income, Black, and brown communities experience worse air pollution on average in the US, government sensors and scientific research have tended to be placed in areas with White and wealthy residents (Grainger & Schreiber 2019). The consequence for public health has been that the most vulnerable communities have continued to suffer the worst from air pollution, since regulators have tied their own hands with scientific blind spots. The devastating effects of these blind spots were recently showcased by journalists at ProPublica, who deployed their own sensors within low-income communities in Louisiana that didn’t have access to government sensors (Younes et al. 2021). The EPA has now begun to address this blind spot with programs that fund training and equipment for participatory air quality sensing, especially in low-income and tribal communities.

Like pollution, artificial intelligence poses a complex and context-dependent regulatory challenge. AI is produced at an industrial scale and deployed widely in different ecosystems, where its benefits and harms are highly dependent on the context where it is used. Due to this context dependence, any blind spots in AI measurement and transparency will perpetuate severe harms over long periods of time—especially when harms occur for people from groups that are under-represented in computing such as women, Black, Latino/a, Hawaiian/Pacific Islander, and Indigenous Americans (Matias & Lewis 2022).

Some scientists will argue that because community/participatory science does not currently have access to information about the inner workings of algorithms, such work has little value. Yet many of the most pioneering and influential studies in algorithm accountability have come from journalists and community science initiatives outside of academic science. People with expertise from lived experience often develop profoundly insightful perspectives that they can translate into scientific knowledge when given the chance, as has already been the case for many of the most groundbreaking studies on the harms of AI systems (Matias 2023). Federal policies on algorithm transparency and accountability can further enable affected communities to produce high quality research in the public interest by ensuring that transparency policies support and protect civil society and journalists in gaining access to internal evidence about companies and algorithms, with appropriate ethical safeguards (Gilbert et al. 2023).

In 2009, the political scientist Elinor Ostrom was awarded the Nobel Prize for observing what elements are essential to managing an evolving technology problem in context-dependent complex systems. One essential component of successful governance, she argued, is to support monitoring by community groups (Dietz, Ostrom, & Stern 2003). As the NTIA considers approaches for AI accountability, you would do well to do the same.

This comment responds to the following AI Accountability Objectives:

Question 1: Community/participatory science in AI accountability is already being used to detect new harms, identify noncompliance with legal standards, human rights obligations, and other mechanisms, and test the effectiveness of remediation strategies.

Question 3 e & g: Community/participatory science in AI accountability is an essential part of any meaningful contestation, redress, and consultation by affected people.

Question 11: We can learn considerable lessons from environmental science, particularly the benefits of community/participatory science for detecting and remediating harm.

Question 15 a & d: Community/participatory science provides a way to distribute monitoring and accountability to a large number of contexts in ways that also deliver significant educational benefits to society, if it is widely enough supported.

Question 16 a & c: Community/participatory science starts with the point of impact and can work backward through the whole supply chain of systems, with sufficient support for transparency. Any system that disallows or under-resources research at the point of human impact will fail to achieve meaningful policy relevance in people’s lives.

Question 21: Any policy that enables transparency should affirmatively support and protect efforts by journalists, civil society, and impacted members of the public to investigate and study the systems that they are encountering (Gilbert, Matias, Lukito 2023).

Question 31: The government should fund community/participatory science to support a strong AI accountability ecosystem. This will include funding for universities, civil society, State/regional/local governments and Tribal communities. The EPA’s program in Participatory Science for Environmental Protection is one such model, as documented in the 2022 report “Using Participatory Science at EPA: Vision and Principles.”

Thank you for taking on this important issue. If I can be of any help, please do reach out.

Dr. J. Nathan Matias

Cornell University

References

Photo: Students in West Virginia monitor water quality and biodiversity. U.S. Fish and Wildlife Service