Facebook’s election content moderation failures in Brazil are a warning for US midterms

Erica Hellerstein


Let’s play a game called “spot the election disinformation.” We’ll start with: “The electronic system is untrustworthy. We need printed paper versions of everyone’s votes so they count! We can’t accept the vote until they change the system.”

Sound off? If you answered yes, congratulations. You’re a step ahead of Facebook’s content moderation system. New research from Global Witness has found that the platform appears woefully unprepared to rein in election-related disinformation ahead of Brazil’s contentious upcoming presidential race. Fears are spreading that incumbent President Jair Bolsonaro will refuse to concede the election if he loses, setting the stage for a possible military confrontation.

The brash, far-right Bolsonaro has consistently trailed his leftist rival, former president Luiz Inácio Lula da Silva, in the polls. Bolsonaro has called Lula a “bandit” who could only win the presidency through electoral fraud, and has dramatically forecast just three outcomes for his political future: prison, death, or electoral victory. Such comments have left Brazilians understandably on edge about the peaceful transfer of power if Bolsonaro loses: concerns about a coup and authoritarian backsliding are widespread, reviving memories of the country’s brutal military dictatorship, which ruled from 1964 to 1985.

This would all be volatile enough without social media. But nothing quite revs up a demagogue on life support like the possibility of spreading mass lies online. As we have previously reported, Bolsonaro has built a social media juggernaut numbering tens of millions of supporters. Election-related disinformation played a significant role in the lead-up to his victory in 2018. With elections scheduled for October 2, Facebook—the country’s most popular social media platform—has inevitably emerged as a key battleground in the fight over Brazil’s political future.

Meta, Facebook’s parent company, has pointed to its election integrity efforts in Brazil as a company priority. So Global Witness put Meta’s claims to the test, submitting ten Portuguese-language Facebook ads that explicitly violated the company’s policies on election advertising. 

“We were putting really obvious election disinformation out there and expecting it to get blocked,” Jon Lloyd, senior advisor at Global Witness, told me. Except it didn’t. According to Global Witness, Facebook approved all of the false election ads, “failing in its efforts to adequately protect Brazilians from a disinformation nightmare.” (In response, Meta said it “prepared extensively for the 2022 election in Brazil” and is “deeply committed to protecting election integrity in Brazil and around the world.”) 

Global Witness pulled the ads before they could see the light of day. But the fact that they could sneak past moderators, despite the resources the company says it has poured into securing Brazil’s election, raises serious questions, and not just for electoral outcomes there. There are a few ways to interpret these results. One is that Meta devotes less attention and fewer resources to non-English content moderation, which has already proven to be an issue for the company.

But when I asked Lloyd if he thought the company would have flagged the ads if they were submitted in English, he was skeptical. “My honest answer is no,” he said bluntly. “I don’t think they would have caught it.” He pointed out that Facebook failed to detect English-language ads submitted as part of an investigation into hate speech advertisements in Kenya. “Everything got approved there as well,” Lloyd added. “I think it’s an enforcement issue. They’re not investing enough in the actual enforcement of their own policies.”

If true, the implications could extend well beyond Brazil. A month after the Lula-Bolsonaro race, Americans will head to the polls to cast their ballots in the 2022 midterm elections. Recent reporting suggests that U.S. election fraud conspiracies have not died down but instead are flourishing across social media networks, including Facebook. With Brazil’s election, Meta had the chance to prove its critics wrong, showing off all the progress it has made to ensure election integrity. 

Instead, Meta appears to be doing the opposite — and the stakes couldn’t be higher for voters from Rio de Janeiro to the Rio Grande Valley.

IN GLOBAL NEWS:

Add El Salvador to the list of countries where some mild criticism of your government on social media can land you behind bars. Police recently arrested a 38-year-old man for the crime of saying mean things online about the country’s authoritarian president. The man, Luis Alexander Rivas Samayoa, posted a photo of President Nayib Bukele’s family on an anonymous Twitter account that frequently criticizes the government. A few hours later, police showed up at his doorstep, and after everyone in the household denied posting the photo, officers confiscated their phones and demanded their passwords. Rivas Samayoa admitted to publishing the picture only after the authorities threatened the entire household with arrest. “The simple fact is the only crime he’s committed is being critical of the abuses that are being carried out by this government,” explained his brother, who reported the incident to the media.

Arbitrary detention is increasingly common in El Salvador. Bukele, who once called himself the “world’s coolest dictator,” is waging a brutal campaign against organized crime that has left dozens dead and tens of thousands of alleged gang members behind bars, resulting in widespread human rights violations, according to Amnesty International. But it’s not just El Salvador where authoritarians are weaponizing Twitter to crack down on independent speech. As we previously reported, police in India arrested a comedian for tweeting a joke that offended the government, while the Saudi government recently sentenced the activist Salma al-Shehab to a jaw-dropping 34 years in prison after she posted tweets supporting women’s rights, a punishment handed down under the kingdom’s 2007 “Anti-Cyber Crime” law. 

In Indonesia, the latest draft of the controversial Criminal Code poses steep risks to digital speech and press freedom ahead of the country’s 2024 elections. A July draft of the code laid out 14 articles the government could use to put journalists and critics behind bars; the most recent draft includes at least 19, according to Indonesia’s Alliance of Independent Journalists. If passed, the code would make insulting the president and vice president a crime punishable by up to five years in jail, and offending the government punishable by up to one and a half years. We recently reported on the proposed revisions to Indonesia’s Criminal Code, in addition to a slew of new regulations that experts fear could turn the country into one of the world’s most repressive internet governance regimes.

On the other side of the world, a little-known data broker is helping police departments across the U.S. build a sweeping digital dragnet. The Virginia-based company, Fog Data Science, collects location data from third-party phone apps and then sells it to state and local law enforcement agencies, according to new research from the Electronic Frontier Foundation. In promotional materials, the company claims that it has “billions” of data points on over 250 million devices that law enforcement could use to track people’s movements over time. “This data could be used to search for and identify everyone who visited a Planned Parenthood on a specific day, or everyone who attended a protest against police violence,” EFF wrote, creating a “staggering” potential for abuse. Chief among critics’ concerns are how data brokers might help police prosecute abortion seekers following the Supreme Court’s decision to overturn Roe v. Wade. Pressure is mounting in Washington to crack down on data brokers, and Congress is currently debating a first-of-its-kind federal privacy bill that, if passed, would place significant regulations on the industry.

This week’s newsletter is curated by Coda’s staff reporter Erica Hellerstein. Liam Scott and Rebekah Robinson contributed to this edition.