Trust & Safety Regulation Discussion at TrustCon 2022
September 2022
How can we put emerging legal issues affecting platform Trust and Safety Teams into the long arc of tech regulation? Today’s panel on regulatory issues at TrustCon, “I am not your lawyer,” includes Cathy Gellis, Henry Lien, Alex Feerst, Kate Klonick, and Daphne Keller.
Cathy Gellis (@CathyGellis) is a lawyer whose legal work includes defending the rights of Internet users and advocating for policy that protects speech and innovation. Henry Lien is Deputy General Counsel at Clubhouse, where he works on a wide range of legal issues, including advising the company on intermediary liability. Alex (@alexfeerst) leads Murmuration Labs, which works with leading tech companies and web3 projects to develop online trust and safety policies, software products, and operations. Kate (@klonick) is an Associate Professor of Law at St. John’s University Law School, an Affiliate Fellow at the Information Society Project at Yale Law School, and a Nonresident Fellow at the Brookings Institution. Daphne (@daphnehk) directs the Program on Platform Regulation at Stanford’s Cyber Policy Center and was formerly the Director of Intermediary Liability at CIS. Her work focuses on platform regulation and Internet users’ rights.
Managing the Risks of Censorship from Content Filtering
Kate opens by asking: as countries pass laws requiring tighter turnaround times, how can platforms avoid making decisions so fast that they lead to censorship?
Alex Feerst reflects on the recent EU terrorism law that gives platforms only one hour to make content take-down decisions. Rather than scrambling to field these requests as they come in, Alex suggests that teams reach out to the reporting entities in advance and give them a way to submit requests so that those requests get more attention. Alex says that platforms do try to balance the need to remove content against preserving as much as possible. But when laws become less and less realistic, they seem engineered to force platforms into over-removal. Thus far, platforms have bent over backwards, he says, perhaps enabling regulators to set unrealistic expectations. Might platforms be able to provide people who face takedowns with contact information for lawmakers, so those lawmakers also feel the concrete effects of what they’re requiring platforms to do?
Daphne Keller remarks that the EU terrorism law is part of a long tradition of requiring platforms to take down bad things. She contrasts that with another school of thought, represented by the Digital Services Act, which says that platforms are violating rights by taking down too much. Meanwhile in Europe, when companies like Google notify people about removals like these, they have been fined for sending those notices. These are two different lawmaking roles that haven’t been harmonized, she says.
Cathy Gellis observes that conversations are happening on two timescales: the firefighting timescale and the forest-management timescale. In the short term, a platform might do things quickly because that’s what regulators are requiring, but platforms also attend to the longer term.
Kate asks Henry how platforms comply with laws like this on a live audio app. It’s a hard problem, Henry says, noting that he’s speaking in a personal capacity. First, you try to identify the biggest risks and tackle them. If a platform can identify the individuals responsible and take action on them, that can also be more efficient than trying to monitor everything.
Making Sense of Recent and Upcoming Regulations
Kate asks Daphne to summarize the EU Digital Services Act, how it compares to the GDPR, and what effect it might have on platforms. Daphne says that platforms with more than 50 employees need to start thinking about the Digital Services Act now. Daphne calls it the “DMCA on steroids.” The DSA includes a formal set of requirements for the fields companies must collect when handling reports of violating content. The law also includes a set of notification requirements, an appeals process, and the opportunity for users to take a dispute to a third party if they disagree, with the platform expected to pay for that process. Very Large Online Platforms (those with more than 45 million users in the EU) face a separate set of requirements as well. Daphne explains that this will require ramp-up, new employees, and new organizational processes.
Alex tries to summarize the work required of platforms to comply with the transparency requirements of the Digital Services Act. One way to think about the DSA’s requirements is “Lumen for all”: records of individual moderation decisions are meant to be provided to third parties, whether as a zip file or through an API, to support transparency research. Providing third parties with individual-level decisions for transparency purposes will require a big change for many platforms. Companies will have to make decisions about what to record, as well as what to redact. Daphne says it should be possible to look at the DSA to determine what companies need to do to comply. When platforms send notices to users, they will be required to send that information to the government. She thinks platforms and the European Union will face difficulties redacting personal information, as well as challenges creating the database.
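To make the record-and-redact problem concrete, here is a minimal, hypothetical sketch in Python of what an individual moderation-decision record and a redaction step might look like before export to a transparency archive. The field names and redaction rules are illustrative assumptions, not the DSA’s actual schema and not legal guidance.

```python
# A hypothetical sketch of a moderation-decision record and the redaction a
# platform might apply before sharing decisions with outside researchers.
from dataclasses import dataclass, asdict
import json


@dataclass
class ModerationDecisionRecord:
    decision_id: str       # internal identifier for the decision
    content_type: str      # e.g. "post", "audio_room", "comment"
    policy_violated: str   # rule or law the content was found to violate
    decision: str          # e.g. "removed", "demoted", "geo-blocked"
    automated: bool        # whether detection or decision was automated
    decided_at: str        # ISO 8601 timestamp
    reporter_email: str    # personal data: should not leave the platform
    user_handle: str       # personal data: should not leave the platform


# Fields a platform might choose to strip before external sharing.
REDACTED_FIELDS = {"reporter_email", "user_handle"}


def redact_for_export(record: ModerationDecisionRecord) -> dict:
    """Drop personal data before a record is written to a public archive."""
    return {k: v for k, v in asdict(record).items() if k not in REDACTED_FIELDS}


if __name__ == "__main__":
    record = ModerationDecisionRecord(
        decision_id="d-001",
        content_type="post",
        policy_violated="terrorist_content",
        decision="removed",
        automated=True,
        decided_at="2022-09-28T14:03:00Z",
        reporter_email="reporter@example.com",
        user_handle="@example_user",
    )
    # The redacted dictionary is what would land in a zip file or API response.
    print(json.dumps(redact_for_export(record), indent=2))
```

Even in a toy example like this, deciding which fields count as personal data, and whether redacted records remain useful to researchers, is exactly the kind of judgment call the panelists expect platforms and regulators to struggle with.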
Kate asks Cathy how much platforms operating outside of the US need to think about Section 230, a US law that governs what kinds of liability platforms face for the content their users post. Even with recent policy debates and state laws, Section 230 is still in lawyers’ quivers, says Cathy. In the meantime, Congress keeps threatening it, and U.S. states are passing their own laws and running into legal problems with the First Amendment and Section 230. Cathy tells us that we can expect a big legal test to learn whether states will be successful at creating more local regulations or whether federal protections under Section 230 will still apply. Cathy observes that platforms have been laissez-faire about what they allow, but if users are unhappy and have no market power to change decisions, they will run to regulators, and those regulators will need to respond. Poking that bear, says Cathy, is a hard decision.
In Europe, says Cathy, regulators don’t want to hear about US laws. She does suggest that platform representatives can tell a story that uses the logic of Section 230 without naming it. Kate observes that countries in the majority world are also facing these issues, even if they don’t have the same bargaining power with companies.
An audience member asks for opinions about recent European laws requiring mandatory scanning for certain kinds of content. Daphne tells us about the Max Mosley case, which was when she first started thinking about these issues. She says that there hasn’t been much discussion about these developments, mostly because the people working on these issues are stretched thin. She also notes that the fight over surveillance, filtering, and freedom of expression got lost in the copyright directive, and that there have been two cases finding that filtering violated free expression. But rather than roll back content filtering requirements, European courts have simply told platforms to carry out filtering in ways that don’t violate freedom of expression.
Trends in Automated Content Filtering Regulations
When discussing automated content moderation, Alex observes that it might be easier to make progress on transparency: supporting more accountability across the supply chain of systems for data collection, algorithm creation, and system performance. If the automated content moderation supply chain were more transparent, it would be easier to manage and reduce the impact on freedom of expression in the early stages.
Henry notes that governments are also getting even more involved in automated content moderation — South Korea has recently passed laws requiring platforms to use government algorithms trained by government databases. He notes that when governments create obligations for all platforms, they are taking away the ability of smaller players to innovate because they need to meet high bars set by regulations designed to target large companies like Google and Meta.
Another audience member asks how laws like the Digital Services Act might support the professionalization of Trust and Safety. Daphne observes a close alignment between a focus on systems (as described in a talk by Del Harvey) and the big-picture ideas behind UK regulation and some parts of the DSA. She does start to get nervous about the decisions made by regulators and platform professionals behind the scenes, especially when they imagine content filters as the solution to managing harm.
Alex observes that speech-related regulations are difficult, especially when they have unintended consequences. After learning the consequences of trying to regulate speech directly, governments are trying to regulate it indirectly through “bank shot” methods such as making certain content less visible. The challenge for regulators, he says, will be to understand the benefits and risks of these even less direct interventions.
Trusting Platforms
Cathy says she often wants to tell regulators to have a light touch. It’s not regulators’ job to solve problems when they are so far removed from the issue. Platform staff care a lot, she says. So regulators should provide platforms with the most space possible to do their job.
What can Trust and Safety staff do when their employers make irresponsible decisions or avoid regulations? As the session closes, Daphne encourages people at the event who are frustrated with decisions made by their employers to consider whether they want to become whistleblowers and to cultivate relationships with regulators they trust, should that be a useful step to take.