Peter Pomerantsev on information wars, Trump, Putin, and regulations

Peter Pomerantsev


Lost in the melee around Ukrainegate has been the revelation of Trump telling the Russian leadership that he was "unconcerned about Moscow's interference in the 2016 U.S. presidential election because the United States did the same in other countries." As if to underline this equivalency, the Russian Parliament just passed a resolution attacking Deutsche Welle, the German public broadcaster, for "interference" in Russian domestic affairs over its coverage of anti-Kremlin protests in Russia.

Trump's equivalency signals the true victory of Russian "information war" — which is not just a series of media techniques, but a quasi-philosophy that explains all history and current affairs through the prism of information manipulation, a vision in which there are no values and no genuine debate but only hoodwinking, where there is no difference between journalism and a psy-op.

As I've written before for Coda, Russian international relations thinkers with a conspiratorial bent first started to develop the idea of "information war" as a quasi-ideology in the late 1990s and early 2000s, arguing that the USSR lost the Cold War not because of an inferior political system and a dearth of economic achievements, but due to "information viruses" like Perestroika and Glasnost (economic reform and enhanced freedom of expression) planted by the West and a fifth column of reformers and dissidents. For a long time such ideas were not in the mainstream, but as the Kremlin searched for ways to explain away protests at home and revolutions against Kremlin-friendly authoritarians and kleptocrats abroad, the "information war" philosophy was brought into the mainstream. Today, Kremlin media, its spokespeople, and anyone who wants to curry favor with it accuse Western media, anti-corruption investigations and even internet companies of being part of a concentration of non-kinetic forces hell-bent on stopping Russia from "rising off its knees." And if the West is up to such malarkey, then why shouldn't Russia launch the odd hack, cyber-attack or sneaky social media campaign against the West? Donald Trump certainly agrees.

But the problem isn't only Trump. To respond, in part, to the Kremlin's social media campaigns, as well as the rise of similar disinformation campaigns domestically, governments in the West have been adopting laws to clamp down on "fake news" — on content that is not a priori illegal, but untrue or "harmful," as a new UK White Paper on the subject puts it. This legislation, which is based around censoring content, plays right into the Kremlin's desire to normalize censorship, to put governments in charge of deciding what is harmful, and pushes us towards its desired end state of "sovereign internets." In the US such legislation is less likely due to the First Amendment, but the language that we use to define the Russian operations, words like "interference" and "meddling," plays into the equivalency of a Trump or Putin: these subjective, value-free terms suggest that any type of information flowing across borders is "meddling," that there is no difference between journalism by Deutsche Welle and a Kremlin troll farm.

Instead, as a series of new policy papers from a Transatlantic Working Group on content moderation, of which I am a member, argues, regulation should focus not on disinformation but on deceptive behavior: the problem is not so much any individual piece of content, true or not, but that it is part of a deceptive campaign. Moreover, as this week's release of new details of the Russian online campaigns reminds us, a lot of content pushed by Kremlin campaigns is not so much untrue as simply partisan. It's the architecture of the whole campaign that needs to be taken down — focus on an individual meme and you miss the point. Of the many papers published by the Working Group, perhaps prioritize the one by Camille Francois, who has done as much forensic work as anyone on the Russian campaign, and whose research on troll campaigns across the world is also included in my new book, This Is Not Propaganda: Adventures in the War Against Reality.

Regulating deceptive behavior does not mean all anonymity should be banned — anonymity is often necessary to protect dissidents. But in those cases, we know the accounts are anonymous individuals. The problem is when you have whole barrages of deceptively created websites, Facebook profiles, and bots that pretend to be something they aren't, and which are part of something that is meant to look organic but is planted.

This push for regulating against deceptive behavior is part of an overall effort we need to make the internet interpretable: we need to make algorithms transparent so we understand why they show us one piece of information and not another; we need to know which bits of our data are used to tailor messages to us in a campaign, and which messages a political campaign is showing one audience and not another. As I argue in This Is Not Propaganda, without this we can't have a common information space to host the common debate that is the essence of deliberative democracy. And without such transparency, how will fact-checkers know which audiences to target with their work, or which bits of dis-informing content are being targeted at whom and why? The EU is launching a hub for fact-checkers, but what we really need the Commission to do is to get the regulation of the internet right so as to make fact-checking more effective. Without a transparent internet their work risks remaining hit and miss. Likewise "media literacy": for us to think critically about content we see online, the means and aims of its production need to be made interpretable.

Disinfo Matters looks beyond fake news to examine how the manipulation of narratives and rewriting of history is reshaping our world.