The digital dimensions of Russia’s war, one year on

Ellery Roberts Biddle



Over the past year, the digital dimensions of Russia's war in Ukraine have run the gamut from early efforts to re-route Ukrainian internet traffic through Russian networks, to countless cyberattacks, to major platform bans for Russian users. What I consistently hear from the Russian side is that, while the official bans on Facebook and Twitter have raised the barriers to finding unbiased information about the war (and all kinds of other things), people who want this stuff are using VPNs and other methods to get what they're after. And while the platforms may be out of reach for the not-so-tech savvy, they are still being used by the Kremlin and its allies to promote their agenda, despite some companies' efforts to reduce their digital power.

In one recent example, Meta has been on the hook for hosting divisive political ads in Moldova that were purchased by an exiled Russian oligarch, Ilan Shor, in what looks like an effort to destabilize the already fragile Eastern European nation. The ads promoted public protests against the generally pro-Western government, with the likely aim of pushing Moldova further into Russia’s sphere of influence. They were also purchased despite Shor being under sanctions by the U.S. — as a U.S. company, Meta shouldn’t be selling ads to Shor. It’s no surprise that this slipped through, since Facebook’s ad systems are almost entirely automated. But this still looks like a pretty big oops.

What role does online speech play in the escalating conflict in the West Bank? Activists in the region are tracking how images and videos of violence are being used to incite further attacks, following the killing of two Israeli brothers last weekend and the subsequent mob attack on the Palestinian community of Huwara. But they’re also bracing themselves for indiscriminate content removal, which has been a perennial issue during periods of heightened tensions, especially on platforms owned by Meta. Mona Shtaya, a Palestinian researcher who has studied bias in Facebook’s content moderation approach for some time, wrote for the Columbia Journalism Review this week about how her organization has become a de facto advocate for Palestinian speech on the platform, systematically documenting unjustified content removals, appealing to the company and, in many cases, winning their reinstatement. I’ll be keeping an eye out in the coming weeks to see how platforms’ decisions about political speech in Israel and Palestine play out at this moment of heightened violence.


One of the most telling responses to the recent earthquake in Turkey and Syria was the Turkish government's rapid rollout of its "Disinformation Reporting Service," an app where anyone can file reports about "manipulative" information in the news or on social media.

The fact that this — a tool of information control — rose to the top of the government’s to-do list in the immediate aftermath of such a deadly disaster says a lot about the ruling party’s priorities. But it also marks an important change in how some governments are seeking to shape the information that people share and consume in the digital space.

Another recent example comes from Iraq. In January, the country’s Ministry of Interior launched a new online platform that encourages regular citizens to report material that “violates public morals, contains negative and indecent messages, and undermines social stability,” in accordance with the country’s penal code. In a video on the ministry’s YouTube channel, an official urged people to participate and emphasized the seriousness of this type of content, reasoning that it “undermines the values of the Iraqi family.” 

Authorities claim it's working. Last month, the ministry said it had received 96,000 reports of "indecent content."

Rather than just penalizing people who post "problematic" material, governments are asking citizens to engage with it by calling it out through official channels. This kind of social engineering drives an effective strategy not only for keeping inconvenient information off the internet but also for shifting public perception of such information in a way that works to the government's advantage. Although the platforms might be new in these two cases, the tactic is not. It reminds me of some of the ways that China's Communist Party engages influencers in the work of "opinion shaping" on social media. When a government manages to get its own citizens working (for free) to eradicate "dangerous" information, it has really got it made.


  • The U.S. Supreme Court heard arguments in two landmark cases that could (but probably won’t) change the way the internet works. Tech Policy Press has gobs of analyses on the cases. If you’re looking for a quick bite, listen to On The Media’s short segment about the arguments and the issues in play.
  • Political scientist and tech ethics pioneer Rumman Chowdhury ran a team at Twitter that focused on mitigating harms that stemmed from the platform’s algorithms until Elon Musk came along. This week, she wrote for the Atlantic about her experience watching Musk destroy the company’s culture from the inside.
  • Internet shutdowns are still a favorite tool of control for governments across the globe. Access Now just published a deep dive on how these played out in 2022, looking at triggers, trends and emerging tactics in digital information control.