In March 2022, the U.S. White House Office of Science and Technology Policy requested comments on how to revise the Federal Strategy for AI R&D. CAT Lab submitted a comment, along with a copy of J. Nathan Matias and Lucas Wright’s March 2022 article for SSRC’s Just Tech Platform on Impact Assessment of Human-Algorithm Feedback Loops. Our comment is below:

Actor Jennifer Lawrence was caught in an escalating cycle of human-AI behavior in 2014 when her intimate photos were stolen and circulated on the social platform Reddit. As people clicked and up-voted the pictures, Reddit’s algorithms showed them to even more people, further accelerating the cycle. When asked about the experience, Lawrence told Vanity Fair, “It just makes me feel like a piece of meat that’s being passed around for a profit.”


Because the 2019 U.S. National R&D Strategy on AI doesn’t address the kind of AI systems that caused Lawrence and others so much harm, the U.S. risks being unprepared to lead safe innovation or reliably govern escalating catastrophes of human and AI behavior. Feedback happens when humans and adaptive algorithms react to each other in ways that change algorithm behavior without further involvement from engineers. And this feedback is everywhere—directing law enforcement, managing financial systems, shaping our cultures, coordinating remarkable generosity, and shaping democratic participation. Yet according to Meta’s President of Global Affairs, even that leader in artificial intelligence is unable to reliably predict or prevent catastrophes of human and algorithm behavior (Clegg 2021), a view that many scientists share (Bak-Coleman et al 2021).

This comment has bearing on Strategy 3 on the ethical, legal, and social implications of AI, Strategy 4 on the safety and security of AI, and Strategy 8 on public-private partnerships.

We are researchers at the Citizens and Technology Lab at Cornell University (CAT Lab), where we conduct citizen and community science on our digital environments. CAT Lab works alongside the public for a world where digital power is guided by evidence and accountable to the public, inspired by the history of community science on food safety, consumer protection, and the environment. Throughout the history of the U.S., industry-independent testing and accountability from community groups, academic researchers, and government agencies have contributed significantly to the parallel growth of industry and public safety. CAT Lab conducts research and development to create the science, policy, and community capacities for similar progress in artificial intelligence and digital technology.

This week, we published a new article with the Social Science Research Council about impact assessment of human-algorithm feedback, with recommendations for scientists, policymakers, and communities. In this note, we summarize key recommendations for the U.S. National AI R&D Strategic Plan.  We also attach the longer article for your reference. Thank you for this opportunity to provide input.

1. Develop a strategy for adaptive AI

Proposals for regulating AI often focus on the bias and accuracy of decisions made by algorithms. Bias is a useful concept for evaluating judgments that need to be impartial and independent (Barocas et al 2019). Unlike decision-making systems, adaptive algorithms are not intended to make impartial, consistent decisions every time, regardless of context. Instead, adaptive systems designed for social media content recommendations, predictive policing, financial trading, and route-mapping are continuously changing their behavior. Consequently, static data evaluations cannot protect people from runaway feedback loops between humans and these algorithms (Ekstrand et al 2022; Lucherini et al, 2021).

2. Support involvement from affected communities at all levels of AI strategy and policy

Much of the most influential research and policy work on AI policy has come directly from affected communities, despite substantial attempts from technology operators to hinder independent investigation and oversight (Charles 2020; Cox 2017). One promising model for improving AI safety and equity is to conduct impact assessments, a model from environment management that includes affected communities in risk and benefit assessment in advance of system development and introduction (Moss et al 2021; Reisman et al 2018). 

The people affected by an issue have the most at stake and the greatest understanding of the context, making them essential conversation partners at all levels of AI development and use. Algorithm designers sometimes involve communities in the design and training of AI systems (Halfaker & Geiger, 2019). In Chicago, researchers and community organizations coordinated with formerly gang-involved youth to develop an alternative to the city’s Strategic Subjects List (Frey et al., 2020). Affected communities have also pioneered algorithm monitoring and accountability, often out of necessity (Matias, 2015; Matias & Mou, 2018). By acknowledging and resourcing community contributions as part of a national strategy, the U.S. can strengthen the excellence, safety, and equity of AI systems.


3. Support community-engaged basic research

A national R&D strategy for Artificial Intelligence can draw from the enthusiasm of our nation’s citizens to make discoveries that ensure AI serves the common good. Community-led, use-inspired basic research has been a staple of effective research, development, and governance of other complex industries, including food safety, water safety, the environment, and automotive testing (Stokes 2011; Blum 2019; Dietz & Ostrom 2003; Merrell et al 1999). For example, the E.P.A. recently dedicated $20 million in grants to support communities and tribal groups in installing air quality sensors that could advance basic science while also providing early warnings of harmful pollution. The E.P.A. also funds water quality monitoring, bacteria monitoring, and crowdsourced environmental violations reporting. With this country’s AI strategy, we can integrate community involvement from the start rather than remediate disasters decades later.