This week, CAT Lab joined other parts of Cornell University as founding members of the U.S. AI Safety Institute Consortium, alongside many other university researchers, civil society organizations, and tech companies.

Since NIST’s earliest work over a century ago to standardize measures and expose frauds that deceived consumers with false data, the institute has advanced the public good and the economy by organizing trusted standards for emerging innovations. At the time NIST was founded, it was nigh-impossible to hold companies accountable for false advertising or dangerous products because no one could agree on the meaning of basic things: what counts as a gallon, what we mean by milk, and how long a foot is.

Without agreed-upon definitions, people facing dangerous or biased AI can’t get problems fixed or seek justice. As pioneers in the community/citizen science of our digital environments, CAT Lab has a decade of experience designing scientifically rigorous methods that anyone can use to understand them. By joining NIST as founding members of the AI Safety Institute Consortium, we will advocate for standards that actually work for people in the public interest.

In 1905, NIST began the important work of creating standard references for the definitions of measurements, chemical ingredients, and other basic building blocks of modern society. Since then, NIST has served Americans by standardizing rail safety, defining standards for fire extinguishers, measuring radiation safety, and creating forensic-science standards for investigating failures and disasters. In more recent years, NIST has done forensic work on the World Trade Center collapse on September 11th and created standards for police use of biometric technologies.

Standards work is urgently needed on artificial intelligence, where the public does not have access to reliable information about the safety, security, and accuracy of AI systems, as I pointed out in an article for Nature last summer. To illustrate, consider the challenge of recording incidents when AI systems fail. At the moment, state-of-the-art incident reporting systems (OECD, AI Incident Database) rely entirely on journalism, with no standard information required of all incidents (see NIST’s report on this issue). As a result, incident reporting systems can’t reliably tell us basic things like what went wrong, how it happened, or why it happened. And that’s just one area within the larger challenge of AI safety.

We have a lot of important work ahead of us.

Why Leadership in the US AI Safety Institute Consortium Matters

Test devices like this one from the 19th century made it easy for anyone to test the quality of milk, and enabled policymakers to enforce food safety laws. AI needs similar standards that ensure systems are trustworthy and that people can report when they aren’t working. (Image CC-BY-NC-SA: Science Museum Group.)

By joining the NIST consortium on AI safety, Cornell is contributing to the urgent needs of millions of Americans affected by AI. CAT Lab and others at Cornell have pioneered techniques to test AI safety, evaluate the fairness of decision-making systems, and analyze the compliance of AI firms with transparency requirements. We are also training students who can take leadership roles in emerging professions and public-service roles dedicated to AI safety. 

Cornell University has been strengthening the safety and reliability of American business since our founding in 1865, from George Chapman Caldwell’s work to reduce infant mortality with scientific standards for milk safety to Hugh DeHaven’s pioneering 20th-century work on automotive safety testing.

I’m grateful to Krystyn Van Vliet, Thorsten Joachims, Natalie Bazarova, and everyone else at Cornell who worked together to develop our relationship with NIST, and I’m excited to see NIST recognize the contribution that Cornell can make to the safety of artificial intelligence in the 21st century and beyond.