In his State of the Union address, President Biden made a point of reviving his administration’s focus on new regulation of the technology sector. Since he took office in 2021, the Biden Administration and members of Congress from both parties have shown considerable interest in regulating algorithms. That interest includes a bill introduced in the last Congress, the Algorithmic Accountability Act of 2022, which would have created a Bureau of Technology within the Federal Trade Commission (FTC) responsible for overseeing “algorithmic impact assessments” conducted by private companies across a wide range of sectors.

While the bill was not passed, the Biden Administration has continued to show interest in regulating algorithms and AI: the White House released its Blueprint for an AI Bill of Rights in October of last year and, more recently, the National Institute of Standards and Technology (NIST) released an Artificial Intelligence Risk Management Framework. While NIST uses the language of “risk assessments”, the kind of evaluation it proposes is in many ways similar to how researchers and advocates have thought about impact assessments.

What would an algorithmic impact assessment actually look like in practice? Because the appetite for this kind of regulation continues to grow, I’m sharing drafts of two mock algorithmic impact assessments I conducted last year with Cornell students Nicolas Clark and Winfield Mac.

Using a modified version of the Ada Lovelace Institute’s framework for algorithmic impact assessments in healthcare, we conducted impact assessments on two algorithms: the one used in the UK in 2020 to predict A-level exam scores when the pandemic made sitting the exams unsafe, and a 2018 change to Facebook’s News Feed ranking algorithm intended to promote different kinds of content in users’ feeds. In both cases, the algorithms had unintended consequences and were widely criticized as harmful, so we conducted our mock assessments with the benefit of hindsight, asking what a successful impact assessment would have looked like in each case.

In conducting these assessments, we developed a few key reflections:

  • Across sectors and types of algorithms, the process, inputs, and expertise required to accurately predict an algorithm’s impact vary widely, and any agency overseeing impact assessments will need flexible standards to accommodate these different scenarios.
  • Involving the perspectives of the communities that will be affected by the algorithm is essential to any impact assessment. 
  • Access to training data and user behavioral data is sometimes necessary, but not always.

Impact means different things in different contexts and for different types of algorithms. The A-levels algorithm didn’t involve any machine learning, so access to training data wasn’t necessary for analyzing risk, and its application was limited to a single use. Its risks would have been reasonably foreseeable with knowledge of the equation used for calculating scores, knowledge of systemic inequalities in the British educational system, and engagement with stakeholders such as teachers.
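To make this concrete, here is a minimal sketch in Python of the kind of deterministic, rank-based standardization at the heart of the A-levels controversy. It is a hypothetical illustration based on public reporting, not Ofqual’s actual equation, but it shows why the risk was foreseeable from the formula alone: each student’s grade is capped by their school’s historical grade distribution, regardless of individual ability.

```python
# Hypothetical sketch of rank-based grade standardization.
# NOT Ofqual's actual formula; the structure and numbers below are
# illustrative assumptions. Students are mapped, in teacher-assigned
# rank order, onto the school's historical grade distribution, so a
# strong student at a historically low-performing school cannot
# receive a top grade.

def assign_grades(ranked_students, historical_distribution):
    """ranked_students: names ordered best to worst by teacher rank.
    historical_distribution: fraction of past students at each grade,
    listed best grade first, e.g. {"A": 0.1, "B": 0.2, ...}."""
    n = len(ranked_students)
    grades, assigned, cumulative = {}, 0, 0.0
    for grade, fraction in historical_distribution.items():
        cumulative += fraction
        # Number of students allowed this grade or better.
        cutoff = round(cumulative * n)
        for student in ranked_students[assigned:cutoff]:
            grades[student] = grade
        assigned = max(assigned, cutoff)
    return grades

cohort = ["Asha", "Ben", "Chloe", "Dev", "Ema",
          "Farid", "Grace", "Hugo", "Iris", "Jack"]
# A school where only 10% of past students earned an A:
history = {"A": 0.1, "B": 0.2, "C": 0.4, "D": 0.3}
print(assign_grades(cohort, history))
# Only the top-ranked student can receive an A, however well the
# others would have performed on the actual exam.
```

No training data is needed to spot the problem: anyone who can read the formula and knows which schools historically underperform can see the disparate impact coming.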

The assessment of the 2018 change to the Facebook News Feed algorithm required a very different set of inputs and expertise. Correctly assessing the risk of this change would require access to internal data, including training data and data on user behavior, as well as knowledge of the complex system of other algorithms and processes the News Feed algorithm interacts with, including the risks posed by those algorithms.
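For contrast, here is a deliberately simplified sketch of what an engagement-weighted feed ranker might look like, loosely inspired by public reporting on the 2018 change. The signals and weights are illustrative assumptions, not Facebook’s actual system, but they show why the formula alone tells an assessor very little.

```python
# Hypothetical engagement-weighted feed ranking. The signals and
# weights below are illustrative assumptions, not Facebook's system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_like: float     # model-predicted probability of a like
    p_comment: float  # model-predicted probability of a comment
    p_share: float    # model-predicted probability of a share

# Upweighting "active" interactions over passive ones, as the 2018
# change reportedly did, changes which posts win the ranking.
WEIGHTS = {"like": 1.0, "comment": 15.0, "share": 30.0}

def score(post: Post) -> float:
    return (WEIGHTS["like"] * post.p_like
            + WEIGHTS["comment"] * post.p_comment
            + WEIGHTS["share"] * post.p_share)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)
```

Whether upweighting comments and shares promotes meaningful conversation or divisive content depends entirely on which posts users actually comment on and share. That question cannot be answered from the scoring function itself; it requires the behavioral data and the surrounding prediction models that only the company holds.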

Hypothetically, algorithms like both of these would be covered by the Algorithmic Accountability Act, but our exercise suggests that handing a single agency the job of defining standards for impact assessments and enforcing them across industries is likely to be considerably difficult, as those standards will need to vary greatly across industries and use cases.

While the Algorithmic Accountability Act wasn’t passed in the last Congress, the idea of government regulations requiring companies to assess the future risks posed by their algorithms hasn’t gone away. So even if a Bureau of Technology overseeing industry impact assessments never comes to fruition, the challenges and lessons from our exercise are likely to be relevant to similar legislation, or to any federal agency that attempts to incorporate algorithmic impact assessments into its work.