How can the ecosystem of public-interest technology research manage the risks that threaten to destroy our credibility and prevent us from serving the public interest?

When community scientists, journalists, NGOs, and academics make discoveries that serve the common good, that work is sometimes inconvenient to companies and governments. The more this research matters, the more we have seen it come under attack by people who claim it is unethical.

In recent years, researchers have been threatened by companies, taken to court, and cut off from data by actors uninterested in independent scrutiny. Powerful organizations sometimes appeal to the public's fears of privacy and ethics violations. While some of these appeals are more credible than others, they have prompted restrictions on independent research.

Here at CAT Lab, we support affected communities in investigating the impacts of technology on their lives through citizen/community science, so we have watched these developments with concern. We know that independent researchers are creative and resilient, even when critics try to label them as unreliable or unsafe. At the same time, researchers do need better protections and support.

To shed light on this situation, CAT Lab has published a new report on industry-independent research. The report describes how journalists, community scientists, and NGOs contribute to the public interest, and it investigates their ethics- and privacy-related practices and needs. In this post, we summarize the report's main findings; the full report is available below:

To study how independent researchers manage risks and threats related to ethics and privacy, we interviewed 14 researchers. We included journalists, civil society, policy, and community/citizen science researchers who conduct Internet research. The study was reviewed by the Cornell IRB. Based on what we heard, we categorized and ranked the risks and threats that researchers face and the practices they use to manage them. From that analysis, we developed a set of recommendations that we hope will support our growing field.

Why independent research matters

First of all, we encountered many amazing stories about how independent researchers are serving the common good. To help people understand who does independent research and why it matters, we share inspiring case studies:

  • InsideAirbnb: From Summer Camp to Tech Accountability Data Service
  • BuzzFeed News: Uncovering the Role of Tech in Mediating Genocide
  • Turkopticon: Early Warnings of Platform Privacy Failures from Worker Organizers

How independent researchers manage risks and threats

Across the interviews, our report highlights risks (factors within researchers' control that could shape trust in an individual project and potentially in whole fields) and threats (efforts outside researchers' control to undermine trust in individual projects and the wider research endeavor) that independent researchers regularly face. Researchers have also developed strategies for navigating these challenges:

  • Risks to communities affect people whose data is included in Internet research, through harms such as privacy violations and exposure to harassment. Researchers mitigate these risks by treating data as they would treat people and by safeguarding that data.
  • Threats to researchers expose research teams to harms including harassment, reputational attacks, legal threats, financial precarity, and even physical harm. Some researchers have managed these threats by self-censoring and avoiding potentially contentious projects, but most have accepted them as the cost of doing this work.

Lessons from our research 

Threats to research affect independent research as a whole and include attacks on public trust in research and actions by firms to quash entire fields of research that hold them accountable. Researchers have approached their work with methodological rigor and ethics practices, but they can’t manage these threats alone. Therefore, we have identified the following lessons for key stakeholders: 

Lessons for policymakers

  • Journalists, civil society, and community scientists (not just academics) do critical research on technology and society.
  • Non-academics do have established, thoughtful practices around ethics and privacy, even if those practices don't look like IRB review, and it's possible to develop guardrails that work for them.
  • Data access policies will fail in their mission unless they include journalists, civil society, and community scientists. 

Lessons for people supporting the research ecosystem 

  • Grow: field-spanning support that includes mentorship; diversity, equity, and inclusion funding; and institution-building.
  • Strengthen: ethics/privacy education and accountability mechanisms.
  • Protect: help researchers manage attacks through policy engagement, strategic public relations, legal advice and defense, and mutual aid networks.

Lessons for researchers

  • You are not alone.
  • You can help by taking stock of your current practices for managing risks and sharing them more widely.
  • Now is a good time to ask how we can strategically strengthen each other’s work.


We are grateful to all of the researchers we spoke to for this study, some of whom we have been permitted to name in the full report. We also thank the NetGain partnership for commissioning this study, and our collaborators on a sister study focusing on academic researchers led by Dr. Josephine Lukito.

We are also grateful to our colleagues at the Princeton University Center for Information Technology Policy, who co-hosted a workshop with CAT Lab in 2022 that helped inspire this research, and to Aure Schrock of Indelible Voice for their editorial support.