Tech companies must engage in the fight against extremism
While the U.S. government has named the Russian Imperial Movement a ‘specially designated global terrorist,’ more needs to be done to limit the spread of hate speech on the internet
The Russian Imperial Movement (RIM) is a small far-right paramilitary group based in St Petersburg and dedicated to the restoration of an ethnic Russian empire. As of this month, it has the distinction of being the first white supremacist group to be named a “specially designated global terrorist” by the U.S. government. The announcement by the State Department on April 6 has been heralded as a major shift for the agency, which until now has overwhelmingly focused its counterterrorism efforts on Islamic extremist groups. This new designation will allow the U.S. to prevent its citizens from providing material or financial support to RIM, and to block the group’s leaders from entering its territory.
Explaining its decision, the State Department cited a global surge in far-right terrorism since 2015, including last year’s mass shooting at two mosques in Christchurch, New Zealand, and a string of similar attacks on Muslims, Jews and people of immigrant backgrounds within the U.S. itself. Nathan A. Sales, the department’s counterterrorism coordinator, praised Donald Trump’s leadership on the issue — a claim likely to raise eyebrows, given that the U.S. president frequently uses far-right rhetoric and has sought to downplay the severity of white supremacist violence on American soil.
Like similar groups elsewhere, the RIM combines real-world violence — it has reportedly recruited volunteers to fight alongside Russian-backed separatists in eastern Ukraine — with heavy use of social media to broaden its reach. According to the U.S.-based think tank the Soufan Center, the group distributed propaganda videos in Russian and English via YouTube, Facebook and other platforms, in order to spread racist material about Jews and Ukrainians, promote the use of weapons and encourage followers to see the West as an enemy. Immediately after the April 6 designation, the RIM tried to use its new-found notoriety as an online recruiting tool, but most of its social media accounts have since been removed.
It is questionable how far the group’s reach ever extended. Its page on the popular Russian social network VKontakte, which was still active as of April 14, lists a relatively modest 14,000 followers. Although the RIM does not appear to have gathered a large following, it has apparently made connections with like-minded individuals and groups elsewhere. According to the State Department, two Swedish neo-Nazis who visited RIM’s St Petersburg training camp went on to commit several attacks in their home country, including the bombing of a migrant center in Gothenburg.
What seems more important to the U.S. is the organization’s geopolitical significance. It is telling that the country’s first international move against far-right extremism targets a group that has intervened in the war against Ukraine, one of its allies. Reports in U.S. media have suggested that one aim of the new policy is to pressure the Russian government into cracking down on the RIM.
It is less clear, however, how this helps address the global problem of mass killings by right-wing extremists — which is what the State Department said it aims to do. In the past decade, a range of far-right activists have exploited patchy regulation and freedom-of-speech concerns to spread racist and anti-democratic material on social media and other online platforms. Some have formed groups that directly advocate violence or seek to carry out attacks: this month, police in Estonia arrested a suspect they believe to be the ringleader of Feuerkrieg Division, the local branch of an international neo-Nazi network coordinated through encrypted online forums. They discovered he was a 13-year-old boy.
On the internet and on social media, the link between ideology and violence can be direct. Many of the most devastating far-right attacks have been carried out by individuals who have immersed themselves in a broader, transnational online ecosystem and hope that their actions will inspire others to commit similar crimes. This model was established by Anders Behring Breivik, who murdered 77 people in Norway in 2011.
Newer technology has intensified this dynamic, with a number of perpetrators — in Christchurch, and at the synagogue shooting in Halle, Germany, last year — adopting the aesthetics of video gamers as they livestreamed their attacks. The researcher Julia Ebner describes this as the “gamification” of terrorism. Elsewhere, individuals with no previous history of far-right activity have been radicalized by material found online. Darren Osborne, who drove a van into a crowd of people outside a Muslim cultural center in London in 2017, killing one person, spent the three weeks before the attack seeking out anti-Muslim propaganda on the internet.
Owing to differences in local laws and regulations, it is difficult for governments to tackle the international proliferation of such material. The long-running U.S.-based neo-Nazi website Stormfront, for instance, has in the past benefited from free speech protections under the country’s First Amendment. Germany, by contrast, recently passed legislation restricting online hate speech, with large fines for tech companies that fail to promptly remove harmful content. And when one route to the public is closed off, far-right activists frequently find another. A joint investigation by The Atlantic and ProPublica recently found that white supremacists banned from other platforms have turned to Amazon Kindle’s self-publishing service to disseminate their ideas.
In the long run, this ongoing process of identifying and blocking outlets where far-right propaganda is spread may prove to be the most effective way of limiting the damage it causes. The British anti-fascist organization Hope Not Hate talks of digital deplatforming: pressuring social media companies to enforce existing laws, or revise their community standards, so that extremist voices can be marginalized, if not shut down entirely. A more convincing effort by the U.S. to tackle far-right violence would address these issues — not least because most of the dominant global social media companies are based there.
But regulating tech and enforcing the law, however justified, will not be sufficient. Far-right ideology feeds on the prejudices and the fears of wider society. We are witnessing its resurgence because politics in many countries has become dominated by individuals who seek to mobilize those same fears. A report last year by the Institute for Strategic Dialogue, a UK-based counter-extremism think tank, found that Trump himself had helped spread the “white genocide” conspiracy theory, an idea said to have motivated several mass shooters in the U.S. and elsewhere. This wider problem can ultimately only be addressed through a democratic political challenge to right-wing nationalism.