Meta doesn’t allow violent speech — except when it does

Ellery Roberts Biddle


When is it OK to call for someone’s death on Instagram? If you’re Palestinian or writing in Arabic, the answer is obvious — never. Meta has admitted in the past that even words like “resistance” have triggered its alarms in the Palestinian context. But elsewhere, it’s less clear-cut.

Russia’s full-scale invasion of Ukraine in 2022 triggered an unusual move by Meta: In early March of that year, Facebook and Instagram users were suddenly allowed to threaten violence against Russian soldiers or Russian President Vladimir Putin. For users in Eastern Europe, the company temporarily loosened longstanding prohibitions on violent speech, thereby openly aligning itself with the Ukrainian side.

In a blog post explaining the company’s rationale, Meta’s President for Global Affairs Nick Clegg wrote: “The fact is, if we applied our standard content policies without any adjustments we would now be removing content from ordinary Ukrainians expressing their resistance and fury at the invading military forces, which would rightly be viewed as unacceptable.”

The decision wasn’t surprising at the time. Western governments and publics were quick to line up in opposition to the war, and Facebook was already persona non grata in Russia, having been blocked inside the country. Although Russian officials later cited Meta’s rule change in their decision to label the company an “extremist” organization, outside Russia, the move had little consequence for Meta’s reputation.

But it left people working on these issues in other parts of the world wondering why Meta had Ukrainians’ backs, but not their own. Mona Shtaya, a Ramallah-based digital rights researcher who works with the Tahrir Institute for Middle East Policy in Washington, D.C., was one of them.

“When there was a political impetus to protect people in Ukraine, they protected them. You can’t do that in Palestine. I think they don’t have the will to do that in our case,” she said. “It is just devastating. Meta is controlling who gets the power and who gets to speak out against the occupier on their platform.”

Shtaya acknowledged that the war in Gaza is indeed “a different context.” While both scenarios feature heavy imbalances of power and resources, and a dominant occupying force, the details that really matter — the politics, culture and regional dynamics in play, and the geopolitical fallout each has triggered — set the two situations apart. It’s also worth noting that Ukraine isn’t the only place where Meta has made a context-specific exception like this. In 2021, amid protests in Iran over the country’s economic crisis, the chant “Death to Khamenei,” referring to Iran’s supreme leader, could routinely be heard on the streets and in videos posted on Instagram — until Meta took them down, citing its general prohibition on calls for violence. When users pushed back and explained the sentiment of the slogan (Iranian-Canadian scholar Mahsa Alimardani said it was like shouting “Fuck Trump”), the company reconsidered. As the protests peaked, posts or audio that contained “Death to Khamenei” in Farsi temporarily became permissible speech.

But in some ways, the virtual lives we lead on Meta’s platforms are intended to transcend context. Instagram and Facebook are governed by a supposedly hallowed set of rules (i.e., the Community Standards) that “apply to everyone, all around the world, and to all types of content.” Regardless of where you are, what you believe or whose side you’re on, the rules are supposed to stay the same. But the company’s actions in select cases demonstrate that it can and will tip the scales of speech around an active conflict if the conditions are right. This is troubling not only in situations of war or social unrest, but also in a year of major elections around the world. Where else might Meta feel comfortable giving a slight advantage to one political actor over another? Or what if it fails to intervene before violence strikes when it could really make a difference? Think of Meta’s reluctance to act against the planning of the insurrection on January 6, 2021, at the U.S. Capitol, or its promotion of calls for the ethnic cleansing of Rohingya Muslims in Myanmar. The consequences of these choices can be life or death.


Baby’s first election flub. While OpenAI is dipping its toes into the murky waters of military contracting — the company now has a cybersecurity contract with the U.S. Department of Defense — its effects on elections are also top of mind this week. After The Washington Post reported that OpenAI’s ChatGPT was being used to power a chatbot that supported longshot Democratic presidential candidate Dean Phillips, the company shut off the campaign’s ability to use its technology, citing violations of its election rules. Setting aside OpenAI’s special ties to Phillips’ campaign, the incident makes one wonder: How will OpenAI moderate political actors’ use (or abuse) of its tools as we head into a year of consequential elections worldwide?

I put the question to Integrity Institute fellow Alexis Crews, who previously worked on governance at Meta. “For plugins and applications, OpenAI should establish strict guidelines, especially regarding election and candidate-related content,” she wrote. “While the public rightly expects OpenAI to set up safeguards against harmful content, achieving this is challenging, particularly for small teams with limited resources.” I know OpenAI isn’t nearly as big as Meta or Google, but as it skates towards a $100 billion valuation, I wonder if it might put more resources towards mitigating harms to democracy.

Sri Lanka’s Online Safety Bill could “pave the way for a dictatorship,” according to opposition member of parliament Eran Wickramaratne. Yesterday, parliamentarians passed the bill, which criminalizes online harassment, data theft, “coordinated inauthentic behavior,” “threatening, alarming or distressing statements” and plenty else. It will require tech companies — including the biggest players, who have opposed the measure — to institute 24-hour turnaround times on content removal requests. And it will empower a five-member commission, appointed by the president, to issue censorship orders. Critics — and there are many — have argued that the commission will allow the president’s cronies to tilt online discourse in their favor, an especially troubling prospect with presidential elections set to be held later this year.

France orders Amazon to pay up over warehouse worker surveillance. Amazon’s French subsidiary is facing a $35 million fine after the national data protection authority investigated the systems used to monitor worker productivity in the company’s massive fulfillment centers. Citing the European Union’s General Data Protection Regulation, the French regulator found that the systems capture and store too much data about workers at too frequent a rate, calling it “excessively intrusive.” While the case isn’t driven by labor rights concerns — an issue at Amazon that we’ve covered in the past — it still underscores the ways in which surveillance has become part of the labor experience in the tech sector. Amazon is threatening to challenge the order.


  • Two people who really know surveillance — acclaimed cryptographer and New York Times best-selling author Bruce Schneier and privacy lawyer extraordinaire Albert Fox Cahn, who runs the NYC-based Surveillance Technology Oversight Project — have some insights on how chatbots could promote social norms that might make us ruder or just plain boring. Read this fun collaboration in The Atlantic.
  • At Coda, we’ve long covered the dark side of urban surveillance, from Medellín to Moscow. But here’s an unusual use of surveillance cameras: Artists in the Dutch city of Utrecht have turned 22 CCTV cameras into an installation that hangs above the water inside a tunnel used by the city’s many cyclists. The cameras light up with the ebb and flow of bicycle traffic and project a poem by British poet Sophia Walker that reflects on people’s fears and motivations related to the search for a place to call home. If, like me, you can’t make it to Utrecht, you can learn more about the project here.