Writing in Science last week, Seth Lazar and Alondra Nelson argued that “Only a sociotechnical approach can truly limit current and potential dangers of advanced AI.” What would that actually look like in practice?

This week, I published a new article in Scientific Reports that demonstrates how affected communities can organize for change in ways that advance basic science — in this case reducing the spread of unreliable news by platform recommenders.

In the paper, I describe and demonstrate how collective, good-faith human collaboration can steer recommender systems in beneficial ways, without communities needing to understand the system's underlying code, filling in a key link in our socio-technical understanding of adaptive algorithms.

Most importantly, I hope this paper can be a helpful Rosetta Stone for computer scientists and social scientists working with affected communities on AI questions, helping them explain the work in ways that scientists consider legitimate. Let me explain…

Putting a question on the map of science

If you’ve heard about this study before, it’s because I completed the experiment in 2017 and even won an award from Fast Company for it. So why did it take so long to publish? The reason is that although the methods and findings are solid, peer reviewers in computer science and social science repeatedly struggled to see the question as legitimate or meaningful.

To publish the article, I needed to convince people of the question, not just the answer. Putting a socio-technical question on the map of science has been the most interesting and challenging project of my research career so far.

It’s *hard* to publish socio-technical research. When I talked to other scholars such as the authors of this article in PNAS on stewardship of global collective behavior, they also struggled to publish outside of specialized venues in Communication and Human-Computer Interaction that have studied these questions for decades. Over time, I created a bingo card with reasons provided by CS and social science reviewers for why these questions aren’t science and shouldn’t be published.

  • Generalizability: By definition, it’s not science, because the algorithm could have changed since then.
  • Generalizability: By definition, it’s not science or engineering, because the behavior of adversarial actors could have changed since then.
  • Novelty: This isn’t novel, because engineers understand their algorithms and can already easily predict what they will do in the world.
  • Technical mechanisms: The answer is insufficient unless you explain everything about how the algorithm works.
  • Technical mechanisms: Showing or describing the code is insufficient, because code is a poor explanation for how the system works.
  • Psych/soc mechanisms: It’s incomplete unless you fully explain how theories of human behavior contribute to this dynamic.
  • PR: This is different from what companies say about their products, so it can’t be true.

After hearing the struggles of others, I was excited to realize that my work could contribute even more to science by opening the door for others to ask similar questions. Achieving that involved a *lot* of personal growth and groundwork over the next six years, including an article in Nature (thanks, CASBS!) and a review piece for the SSRC (thanks, Just Tech!).

A Rosetta Stone for Socio-Technical Science of Human-Algorithm Behavior

To convince reviewers from computer science and the social sciences alike, I needed to create a multi-faceted argument that would be legible to multiple sub-fields and still form a coherent whole. This paper offers two guides that I hope will be helpful to others:

  • A four-step model of mutual influence between human and algorithm behavior:
    • Algorithm mechanisms -> interaction design
    • Interaction design -> psychology
    • Psychology -> collective behavior
    • Collective behavior -> algorithm mechanisms (this is where my new paper adds knowledge)
  • A translation guide to socio-technical hypotheses in psychology, computer science (HCI, RecSys, Infosec), and community science
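To make the loop above concrete, here is a toy sketch in Python. It is entirely hypothetical (the item names, scores, and update rules are invented for illustration; this is not the paper's model or any platform's real ranker), but it shows the four-step cycle in miniature: a score-based recommender promotes an item, users engage with what they see, and that engagement feeds back into the scores — which is why coordinated community behavior can change what the algorithm promotes.

```python
# Toy sketch of the four-step feedback loop (hypothetical; not the
# paper's model or any real recommender). A score-based ranker
# promotes the top item (algorithm mechanisms -> interaction design),
# users engage with what they see (interaction design -> psychology
# -> collective behavior), and that engagement updates the scores
# (collective behavior -> algorithm mechanisms).

def rank(scores):
    """Return item names ordered by score, highest first."""
    return sorted(scores, key=scores.get, reverse=True)

def simulate(rounds, community_checks_facts):
    # "unreliable" starts with a small engagement advantage.
    scores = {"reliable": 1.0, "unreliable": 1.2}
    for _ in range(rounds):
        top = rank(scores)[0]
        if top == "unreliable" and community_checks_facts:
            # Coordinated intervention: a fact-checking prompt nudges
            # users to engage with reliable sources instead.
            scores["reliable"] += 0.1
            scores["unreliable"] *= 0.95
        else:
            # Default loop: visibility begets engagement begets score.
            scores[top] += 0.1
    return rank(scores)[0]

print(simulate(50, community_checks_facts=False))  # -> unreliable
print(simulate(50, community_checks_facts=True))   # -> reliable
```

Without the intervention, the visibility-engagement loop entrenches the initially advantaged item; with it, collective behavior redirects the feedback and the ranker settles on the reliable one.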

I’m far from the only person doing this translation work. Since I submitted this article, a group of researchers has published a new Handbook of Human-Machine Communication that is far more comprehensive than any single paper can be. I hope that if we all keep working together, we can increase the pace of science to match the urgent need.

And don’t worry; this deliberate academic process hasn’t held up this paper’s practical impact. As soon as I had results, I shared them with my collaborators at r/worldnews. The preliminary findings have influenced many crowd-sourced fact-checking projects, have been taught in classrooms around the world, and are even key details in multiple popular books on AI and social media. Even more wonderfully, more than one person has started a PhD after getting involved in or reading about the paper, which still fills me with an astonished happiness that’s too precious for words.

So I’ve accepted the lesson I learned from this project about the pace of science. As an academic, my job is to create knowledge for the next generation, and science is ultimately a journey of persuasion. If you can convince other organized skeptics to see things your way, then your work might get passed on through time.

I am proud (and relieved) to see this paper into the world. And I’m continuing to develop new studies with communities around forecasting, influencing, and intervening in collective human-algorithm behavior. Most of all, I hope this project can serve as a pathway for others to inform, surprise, and delight us with research that makes a practical difference in people’s lives and advances basic science.


This study was part of my dissertation research, advised by Ethan Zuckerman at the MIT Media Lab and MIT Center for Civic Media. I am deeply grateful to the r/worldnews moderators and the tens of thousands of community members who supported this research by participating in the study.

I am also grateful to Merry Mou, who provided software engineering support, to Elizabeth Levy Paluck, Robin Gomila, and Lucas Wright for feedback on drafts, and to Sean Taylor, Martin Saveski, and Shoshana Vasserman, who offered helpful advice on estimating effects on the behavior of ranking aggregators. Support for writing and analysis was funded by the Templeton World Charity Foundation, the Siegel Family Endowment, and the Annenberg Foundation. I am also profoundly grateful to colleagues at the Berkman Klein Center for Internet & Society and the Center for Advanced Study in the Behavioral Sciences for encouraging me in this work <3