This year, New York City put into effect the world’s first law requiring transparency for hiring algorithms: software systems that review and rank job applicants. If you have ever been interviewed by an AI or wondered whether your résumé was ranked by an AI system, this law was designed to help you. Many of these algorithms (called automated employment decision tools, or AEDTs) are notoriously biased, and the city wanted to jumpstart accountability with the new law.

According to Local Law 144, employers hiring in the city are required to pay a third-party auditor to analyze their algorithms for bias and then make the audit report public. Employers may also be required to give job-seekers the chance to opt out. If they don’t comply, they face penalties between $500 and $1,500 per day.

This law is untested and only a few months old. It’s also a law the world is watching, since it’s the first to set out to protect job applicants from unfair algorithmic hiring. Our project was designed to help job-seekers by examining employer practices, help employers by studying the user experience they’re creating, and help lawmakers understand what the law is achieving and where it falls short.

“The world’s first law to mandate publicly transparent algorithmic auditing has actually created incentives to withhold data and avoid auditing.”

In a partnership among CAT Lab, the Data & Society Research Institute, and Consumer Reports, we recruited 155 undergraduate students from Cornell’s Communication and Technology class to systematically analyze how hundreds of employers responded to Local Law 144. We also followed up with employers over email and on the phone. We wanted to learn:

  • How many employers are complying with Local Law 144?
  • What is the experience for job-seekers?
  • How biased are companies’ hiring algorithms?
  • What can creators of future algorithm transparency laws learn from the response to this law?

What we found

  • Out of 391 employers, 18 published hiring algorithm audit reports and 13 posted transparency notices informing job-seekers of their rights. This compliance by some employers represents an important early step in algorithm transparency and regulation.
  • Most employers implemented the law in ways that make it practically impossible for job-seekers to learn about their rights or exercise them under Local Law 144.
  • We strongly doubt that all employers are complying with the law’s transparency requirements. While we can’t prove this, our data is consistent with a situation in which most employers are withholding audit results to reduce their chances of facing federal enforcement or discrimination lawsuits.
  • Some employers have stopped using hiring algorithms in New York City while complying with the law. Given how notoriously unreliable hiring algorithms can be, this may have reduced algorithm bias somewhat. Loopholes in the law prevent us from knowing for sure.
  • Overall, the law is giving work to a new profession of algorithm auditors and making some employers more transparent. But the law is not helping job-seekers or improving overall algorithm transparency because it gives employers extreme discretion over compliance and strong incentives to avoid transparency. Our paper suggests ways to close these loopholes.

What’s the Problem of Null Compliance?

Because Local Law 144 gives companies so much discretion over whether to follow the law, it’s impossible in most cases to tell whether a company is complying. That makes it impossible for job-seekers to know whether they can file a complaint, for the city to know whether companies are following the law, and for researchers to reliably study algorithm bias. This problem is so messy that we had to create a new term for it: Null Compliance.

How to cite our paper

Wright, L., Muenster, R., Vecchione, B., Qu, T., Cai, S., Smith, A., Student Investigators, Metcalf, J., & Matias, J. N. (2024, January 22). Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability. DOI: 10.17605/OSF.IO/UPFDK

Research Dataset

Alongside the paper, we have published an archive of the data and results of this project, including records of all audit reports and transparency notices. This is the first large-scale study of compliance and user experience for the world’s first law mandating algorithmic bias audits for commercial products.

You can find our archive on the Open Science Framework: osf.io/upfdk/

The archive includes five things (see the sketch after this list):

  1. An investigator dataframe with information about the student investigators’ experience conducting their searches. This includes a column recording unverified bias audits and transparency notices, which should not be interpreted as evidence of compliance.
  2. An employer dataframe with one row per employer. To our knowledge, this is the canonical list of public bias audits and transparency notices as of (X DATE), based on our research methodology.
  3. An impact ratio dataframe with one row for every impact ratio we observed across all of the audits we discovered in our study. 
  4. A folder for each employer containing every document we collected.
  5. A data descriptor explaining all of these resources.
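
For readers unfamiliar with the impact ratios in item 3: under Local Law 144, the impact ratio for a selection-based tool is the selection rate for a demographic category dividedded by the selection rate of the most-selected category. The Python sketch below illustrates that calculation and one possible way to load the archive’s dataframes; the file names, column names, and numbers are our illustrative assumptions, not the archive’s actual schema (the data descriptor documents the real one).

```python
import pandas as pd

# Hypothetical selection data for one employer's hiring tool:
# applicants and selections by demographic category (illustrative numbers).
selections = pd.DataFrame({
    "category":   ["Group A", "Group B", "Group C"],
    "applicants": [200, 150, 120],
    "selected":   [60, 30, 18],
})

# Impact ratio under Local Law 144 (selection-based tools): the selection
# rate for each category divided by the rate of the most-selected category.
selections["selection_rate"] = selections["selected"] / selections["applicants"]
selections["impact_ratio"] = (
    selections["selection_rate"] / selections["selection_rate"].max()
)
print(selections)
# Group A: rate 0.30 -> impact ratio 1.00 (most-selected category)
# Group B: rate 0.20 -> impact ratio 0.67
# Group C: rate 0.15 -> impact ratio 0.50

# One possible way to combine the archive's dataframes (hypothetical
# file and column names; consult the data descriptor for the real ones):
# employers = pd.read_csv("employer_dataframe.csv")
# impact_ratios = pd.read_csv("impact_ratio_dataframe.csv")
# merged = impact_ratios.merge(employers, on="employer", how="left")
```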

Our dataset is not an ongoing archive, and New York City did not commit to keeping records of the AEDT audit reports. We strongly encourage governments considering algorithm transparency laws to include provisions for ongoing, centralized repositories for the documentation those laws require.

Note to Employers: If you represent an employer in this dataset and weren’t able to respond to our message asking for corrections, please complete this form with information about the state of your company’s website by the end of November 2023. We will update the dataset and the paper during final revisions as part of the peer review process later in 2024.

This article is a pre-print, which means we’re still revising and improving it. If you find an error not addressed by the form, please contact Lucas Wright at <law323@cornell.edu> and Nathan Matias at <nathan.matias@cornell.edu>.

Project Team

  • Lucas Wright, PhD student, Cornell University
  • Roxana Mika Muenster, PhD student, Cornell University
  • Briana Vecchione, Technical Researcher, Data & Society Research Institute  
  • Tianyao Qu, PhD student, Cornell University  
  • Pika (Senhuang) Cai, PhD student, Cornell University 
  • Alan Smith, Community Leadership, Consumer Reports
  • COMM/INFO 2450 Student Investigators, undergraduate students, Cornell University
  • Jacob Metcalf, Program Director, AI on the Ground Initiative, Data & Society Research Institute
  • J. Nathan Matias, Assistant Professor, Citizens and Technology Lab, Cornell University

Acknowledgments

We are deeply grateful to the students of COMM/INFO 2450 for their thorough and extensive collaboration on this project, among them Amelia Neumann, Andrew Wu, Angelina Chen, Anjiya Amlani, Anushka Shorewala, Bella Samtani, Bingsong Li, Carina Wang, Caroline Michailoff, Chelsea Lin, Chengling Zheng, Diana Flores Valdivia, Doan-Viet Nguyen, Dora Xu, Erik Starling, Evelyn C Kim, Gianna Chan, Haley Qin, Hannah M. Yeh, Hermione Bossolina, Hope Best, Ingrid Gruener Luft, Jacob Levin, Jimin Kim, Jolene Ie, Kashmala Arif, Katherine Hahnenberg, Kathryn M. Papagianopoulos, Kevin Jianzhi Wang, Kexin Li, Kimmie Jimenez, Lili Mkrtchyan, Lindsay Peck, Maksym “Max” Bohun, Mark Timothy Bell, Mika Labadan, Minh H. Le, Neha Sunkara, Nicholas Bergersen, Nicholas Won, Nicole Tian, Noah Salzman, Nuo Cen, Omar Ahmed, Owen J. Chen, Reinesse Wong, Sebastian Klein, Shukria Mirzaie, Simah Sahnosh, Siying Cui, Sophia Torres Lugo, Sritanay Vedartham, Subhadra Das, Thej Khanna, Varsha Gande, Weiyan Zhang, Wen Yu Chen, Yanran Li, Yiwen Zhang, Yuchen Yang, Yuyan Wu, and Zoey Arnold.

We thank our other collaborators on related projects, namely Alexandra Mateescu (who supported the pilot audit collection), Lara Groves, Andrew Strait, Alayna Kennedy, and Ranjit Singh, for briefing student investigators on NY LL 144 and helping imagine this project in its early stages. We also thank the Data & Society RAW Materials Seminar participants for their insightful comments. 

We are also grateful to the Cornell University Department of Communication for supporting this classroom research. J. Nathan Matias received financial support from the Center for Advanced Study in the Behavioral Sciences, the Lenore Annenberg and Wallis Annenberg Foundation, and the Siegel Family Endowment. Researchers at the Data & Society Institute were supported by a grant from the Omidyar Network and, in part, by a grant from the Open Society Foundations.