Do Social Media Platforms Have Effective Ad Policies? We Audited Google and Facebook
December 2018
This article was written by J. Nathan Matias, Austin Hounsel and Melissa Hopkins. It was originally published on The Atlantic on November 2, 2018. It is re-published here in accordance with our joint publishing agreement.
When Jo Brower tried to advertise a bicycle fundraiser for wounded veterans on Facebook, the company flagged her ad as political. “I asked them to help me understand what part of it was political,” says Brower. “There was nothing political. They never got back to me.” Facebook refused her appeal and charged Brower for the ad. Now, Brower doubts she will advertise on Facebook again.
This year, online advertising platforms including Facebook, Google, and Twitter are reviewing millions of ads to protect American elections from unlawful influence and to disclose legitimate campaign activity. These platforms are also making mistakes. This election, Facebook has been accused of wrongly blocking ads for community centers, news articles, veterans pages, and food advertising.
If corporate political filters make enough mistakes, they could substantially impact American civic life. Public holidays, community centers, and news conversations knit together the civic fabric of democratic life, enabling Americans to understand each other and work together despite our differences. Each time a platform wrongly restricts a community announcement, this civic fabric weakens, with fewer people honoring American veterans, fewer relationships among neighbors, and less common understanding at a time of growing polarization.
How common are these mistakes, and are they politically biased? To find out, our team of academics and U.S. citizens submitted hundreds of non-election ads to Facebook and Google, recorded their responses, and analyzed the results. Because the platforms’ election-advertising policies are so broad and complex, we had to write a software program to direct our eight-person human audit team, which also included Ben Werdmuller, Jason Griffey, Chris Peterson, Scott Hale, and Nick Feamster.

By reviewing every ad before it’s published, tech firms are creating some of the largest and fastest policy-enforcement systems in human history. Last quarter, Google’s parent company, Alphabet, reported $28.5 billion in advertising revenue. Facebook reported earning $13 billion. According to Facebook’s transparency website, the company identified over 33,000 political ads per week in the site’s first two months. Political ads accounted for at most 1 percent of Facebook’s quarterly advertising revenue, according to data from NYU researchers led by Laura Edelson and Damon McCoy.
If these public figures are correct, Facebook could be reviewing millions of ads per week for political content, even if only a small percentage of ads are related to the election. Google, which earns more than twice the ad revenue of Facebook, could be reviewing even more. While broadcasters and print publishers also review advertising content, digital ads are micro-targeted, A/B tested, published in real time, and continuously adjusted.
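A rough back-of-envelope calculation shows the scale these figures imply. The sketch below assumes, purely for illustration, that political ads make up about the same share of ad volume as they do of revenue; the true share is not public.

```python
# Back-of-envelope estimate of Facebook's weekly ad-review volume.
# Assumption (ours, not Facebook's): political ads' share of ad *volume*
# roughly matches their share of revenue, at most 1 percent.

political_ads_per_week = 33_000       # figure from Facebook's transparency site
political_share_of_volume = 0.01      # assumed upper bound, mirroring the revenue share

estimated_total_ads_per_week = political_ads_per_week / political_share_of_volume
print(f"Estimated ads reviewed per week: {estimated_total_ads_per_week:,.0f}")
# -> roughly 3.3 million per week, and more if the true share is below 1 percent
```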
Where else is any organization making a similar number of judgments? The College Board reviewed only around 7 million SATs last year. Over the same period, women received roughly 25 million mammograms, and 25 million Americans were stopped by the police. In 2013, Google processed 4.5 million copyright takedown requests per week on average, but those were mostly automated complaints. Perhaps the only higher-volume detection system in American life is the TSA, which screens 6.2 million airport baggage items every day.
Facebook detects political ads through machine-learning algorithms and human reviewers. According to a Facebook spokesperson, the company uses “automated, and in some cases, manual review to check the ads against Facebook’s Advertising Policies.” The company reported that approximately 3,000 to 4,000 people currently review ads related to politics or issues.
People and algorithms often make mistakes, even when they make accurate decisions on average. Mammograms have false positives. The SAT doesn’t always predict college success. Police are sometimes biased, and copyright-enforcement bots also make mistakes. A 2016 study led by Stanford law professor Jennifer Urban found that 28 percent of attempts to remove online content for copyright violations are flawed in some way, potentially causing tens of millions of videos, tweets, and search results to be mistakenly censored every year.
When Facebook decided that Jo Brower’s charity fundraiser was an election-related ad, the mistake had real consequences. The company’s decision could have reduced the number of donations to wounded veterans. If Facebook and other platforms make enough of these mistakes at scale, they could meaningfully reduce participation in Veterans Day memorials and fundraisers across the country.
Even though Brower’s nonpartisan ad didn’t mention a candidate or election, it’s possible that Facebook interpreted her fundraiser as an ad about what the company calls “issues of national importance.” Campaign ads and issue ads are the two categories at the center of debates over advertising in American elections.
Platform policies toward campaign ads are shaped by U.S. legal definitions. Ads that are designed to influence the outcome of federal elections are regulated by the Federal Election Commission (FEC). The FEC enforces limits on campaign contributions, requires disclosure from larger donors, and prohibits anyone but U.S. citizens and green-card holders from supporting campaigns. Facebook, Google, and Twitter all require advertisers to seek authorization before they can publish federal campaign ads.
Issue ads are harder to define. “The Federal Election Act doesn’t recognize issue ads as a category,” according to Vivek Krishnamurthy, Counsel at Foley Hoag LLP and Lecturer on Law at Harvard Law School. “The distinction between ads that discuss public issues from ads that advocate for or against candidates is a line that comes from the First Amendment.” Because the First Amendment applies to government restrictions on speech, companies are free to create more restrictive practices.
While there’s no agreed-upon definition of issue advertising, Facebook, Google, and Twitter all added issue ads to their policies after learning of attempts by the Russian Internet Research Agency (IRA) to influence the 2016 election. The IRA used online platforms to promote issues like Black Lives Matter, anti-Muslim sentiment, and LGBTQ rights. Each company now reviews every advertising attempt and requires advertisers to obtain authorization before publishing ads about “politics or issues of national importance” (Facebook) and “legislative issues of national importance” (Twitter). Google’s policies cover political-issue advocacy, and they announced in August that they were working on tools to detect issue ads and campaign ads in state elections.
These corporate policies filter who is allowed to publish election-related ads while also disclosing those funders to the public. If a platform decides that an ad is election-related, it requires the advertiser to document that they are eligible to publish election ads. After platforms grant authorization, advertisers are allowed to publish the ads, and platforms disclose the funder. If the advertiser declines to seek authorization or fails the process, the ads are not published and no record of the attempt is disclosed to the public.
To test and compare mistaken enforcement by Google and Facebook, our research team wrote software to create ads that look like ones that Facebook has already mistakenly blocked. The software created product ads for music albums that share a name with election candidates. The software also created ads for national parks and Veterans Day parades that could be mistaken for issue ads. We then compared the differences in mistaken-enforcement rates for parks and parades versus products, left- or right-leaning mistakes, and federal versus state elections.
We also tested whether platforms mistakenly enforced their policies differently for U.S. citizens at home or abroad. Last year, then-Senator Al Franken criticized Facebook for accepting advertising payment in rubles, even though the FEC places no currency restrictions on campaign ads. One group of our ad posters consisted of U.S. citizens using dollars within the United States. A second group of U.S. citizens posted from internet connections outside the United States and paid in non-U.S. currencies.
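To make the design concrete, here is a minimal sketch of how ad variants could be enumerated from the factors described above. The factor and function names are illustrative only; they are not the actual software our team ran.

```python
from itertools import product

# Factors in the audit design (names are illustrative, not the real code).
PLATFORMS = ["facebook", "google"]
AD_TYPES = ["park_or_parade", "product_with_candidate_name"]
ELECTION_LEVELS = ["federal", "state"]
PERCEIVED_LEANINGS = ["left", "right"]
POSTER_LOCATIONS = ["us_dollars_domestic", "non_us_currency_abroad"]

def generate_ad_conditions():
    """Enumerate every combination of factors to assign to non-election test ads."""
    for platform, ad_type, level, leaning, location in product(
            PLATFORMS, AD_TYPES, ELECTION_LEVELS, PERCEIVED_LEANINGS, POSTER_LOCATIONS):
        yield {
            "platform": platform,
            "ad_type": ad_type,
            "election_level": level,
            "perceived_leaning": leaning,
            "poster_location": location,
        }

if __name__ == "__main__":
    conditions = list(generate_ad_conditions())
    print(f"{len(conditions)} distinct conditions")  # ads are then drafted for each condition
```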

We posted a total of 477 ads to Facebook and Google from September 17 through October 10. Facebook prohibited 10 of our 238 ads (4 percent), citing its election policies. Google didn’t prohibit any of our 239 ads (our report includes full results and data). The following chart shows both platforms’ ad-publication rates for each combination of non-election ads: type (parks and parades or products), election level (state or federal), and the political leaning that a company might misinterpret (right or left).
Most of the ads that Facebook prohibited were for national parks or Veterans Day parades. One was for a music album that shared a name with a candidate. Facebook prohibited 5 percent of our ads for Veterans Day gatherings and 18 percent of national park ads linking to government websites. When we appealed some of these decisions, Facebook’s reviewers reversed them, confirming that the original decisions were mistakes.
Maybe these mistakes aren’t so surprising. Overall, Facebook prohibited 11 percent of our park and parade ads and 1 percent of product ads that included a candidate name, a difference of 10 percentage points. Political ads often invoke imagery and values common to many Americans. Machine-learning models designed to detect these political ads could easily learn these features and systematically prohibit information and ideas central to nonpartisan American civic life. Whatever the reason, the company prohibited far fewer product ads than civic ads.
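For readers who want to recompute enforcement rates like these, the short sketch below groups a results table by condition and computes publication rates. The column names and the tiny inline dataset are hypothetical; the full results and data accompany our report.

```python
import pandas as pd

# Hypothetical results records; the real data accompany our report.
records = [
    {"platform": "facebook", "ad_type": "park_or_parade", "level": "federal", "published": False},
    {"platform": "facebook", "ad_type": "park_or_parade", "level": "state",   "published": True},
    {"platform": "facebook", "ad_type": "product",        "level": "federal", "published": True},
    {"platform": "google",   "ad_type": "park_or_parade", "level": "federal", "published": True},
    {"platform": "google",   "ad_type": "product",        "level": "state",   "published": True},
]
df = pd.DataFrame(records)

# Share of ads each platform published, broken out by ad type and election level.
publication_rates = (
    df.groupby(["platform", "ad_type", "level"])["published"]
      .mean()
      .rename("publication_rate")
      .reset_index()
)
print(publication_rates)
```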
“As happens with any new process, we’re making mistakes as we learn,” said a Facebook spokesperson in response to our investigation. “These instances, however, are a small percentage of our efforts to bring more transparency to political and issue advertising.” The spokesperson stated that “We routinely evaluate ad review and take steps to improve the machine-learning model as well as our reviewer-training material.”
We’re not sure how to make sense of Google’s response to our ads. Google’s transparency report listed roughly 183,000 political ads from May 31 through October 15, less than one-fifth the weekly rate of political ads that the NYU team reported for Facebook. Maybe fewer campaigns are advertising on Google and YouTube, despite Google’s larger share of the ad market. Maybe Google reports fewer political ads because its policies are less broad than Facebook’s. Google has expressed an intent to permit state campaign ads and political-issue ads “with restrictions,” but those restrictions may not yet be in place. Google may also have been less successful at detecting political advertising than Facebook, a question our research cannot answer.
Whether Google’s policy enforcement is narrower, more lax, or more accurate than Facebook’s, the company did correctly publish each of our non-election ads, in line with its own policies.
Advertising transparency could be good for elections. In the 2010 majority opinion in Citizens United, Justice Kennedy argued that real-time online transparency of election advertising could help voters “give proper weight to different speakers and messages” and “hold corporations and elected officials accountable for their positions and supporters.” Even if transparency does improve the integrity of elections, platforms will always make at least some mistakes. If mistakes are common enough, false positives like the ones we observed from Facebook could have influential side effects on important parts of American civic life.
In St. Petersburg, Florida, with the bicycle fundraiser for wounded veterans just a few weeks away, Jo Brower is still frustrated with what seems like an accusation of partisanship from Facebook. “I’m also promoting it other ways. I have nothing to hide.” Local veterans groups and the St. Petersburg Area Chamber of Commerce have promoted the fundraiser, and some riders have already registered. Yet none of Brower’s other options can reach as many people as Facebook can.
Platforms have good reasons to protect democracies from illegitimate attempts to influence voters. Those protections also have a cost. Without care, the greatest collateral damage from these protections could be the nonpartisan communities and conversations that divided societies most desperately need.