How do the ways we talk about online safety shape the types of policy solutions companies design, the communities who get to participate in the governance of technology, and the kinds of knowledge and expertise we value in this conversation? As the field of trust and safety professionalizes and a new industry of companies emerges to sell its expertise back to platforms, it is important to reflect on the framings, metaphors, and terminology that the field adopts as foundational.

Take, for example, a telling tension in how different experts talk about disinformation. In January, the World Economic Forum declared that disinformation poses the top global risk over the next two years, based on the opinions of more than 1,400 experts. Yet more recently, a group of disinformation and national security analysts argued that exaggerating the threat of disinformation can actually cause more harm than downplaying it, by undermining public trust and promoting conspiratorial thinking. Both perspectives agree that disinformation can harm democratic societies, but they disagree over how we should frame the threat.

Why does that difference matter? Because the language we use to describe an issue transforms the social realities of that issue. Scholars have applied this idea to security policy to argue that insecurity is not an objectively occurring state in the world, but a particular framing of an issue applied through language.1 An issue is “securitized” through the act of describing it as an existential threat to the survival of some collective (society, the nation, the state, etc.). This has downstream effects on our response to the issue, enabling the securitizing actor to take drastic measures to counter the threat outside the realm of normal politics. Once the idea of an existential threat is widely accepted, certain social commitments become fixed: some policies appear as common sense while the social possibility of others is closed off.

While the securitizing actor is often the state, securitization theory tells us that other actors can make this move as well and that analyzing how and when certain actors are able to do so is important for understanding the governance of the issue. This brings us back to the question of how to talk about the threat of disinformation, which, from one angle, could be viewed as a debate over the extent to which the information sphere should be securitized. 

Describing disinformation as the number one global risk to humanity (ahead of things like climate disasters, interstate armed conflict, and pollution) is one example of a securitizing move. It elevates the issue to the status of an existential threat to a collective and justifies a strong policy response. But what types of policies and design solutions are prioritized by this response? Whose perspectives and forms of expertise are elevated, and whose are excluded? What other framings could we apply to disinformation (and other issues of digital safety more broadly) and what policies and forms of expertise would those framings promote?

These are some of the questions I am exploring in my dissertation. They are also questions that the field of trust and safety deals with constantly. I’m especially interested in how actors in the emerging ecosystem of vendors selling trust and safety services and intelligence products to platforms think about this problem, and whether that thinking influences the kinds of tools they build. Do these vendors see themselves as engaging in securitizing discourses, or are they instead responding to already securitized threats?

My hope is that by analyzing the work that these securitizing moves perform in digital governance, we will better understand the actors who get to make the move, the space for particular policies and design choices that it opens up, and the perspectives and voices that may be left out of the conversation as a result.

  1. See, for example: Balzacq, T., Léonard, S., & Ruzicka, J. (2016). ‘Securitization’ revisited: Theory and cases. International Relations, 30(4), 494–531. https://doi.org/10.1177/0047117815596590; Buzan, B., Wæver, O., & de Wilde, J. (1998). Security: A New Framework for Analysis. Lynne Rienner; Wæver, O. (1996). European Security Identities. JCMS: Journal of Common Market Studies, 34(1), 103–132. https://doi.org/10.1111/j.1468-5965.1996.tb00562.x