How AI is supercharging political disinformation ops
Were Slovakia’s elections rigged? Or was that just the artificial intelligence talking? Two days before Slovaks went to the polls last week, an explosive post made the rounds on Facebook. It was an audio recording of Progressive Slovakia party leader Michal Simecka telling a well-known journalist about his plan to buy votes from the country’s marginalized Roma minority. Or at least, that is what it sounded like. There was reason to believe that Simecka might have been desperate enough to do whatever it took to win the election — his party had been polling neck and neck with that of former Prime Minister Robert Fico, who resigned from the job back in 2018 amid anti-corruption protests following the murders of journalist Jan Kuciak and his fiancee Martina Kusnirova.
Simecka and the journalist featured in the audio clip both quickly called it a fake, and fact-checking groups backed up their claims, noting that the digital file showed signs of having been manipulated using AI. But they were in a tough spot — the recording emerged during the 48-hour pre-polling period in which the media and politicians are restricted by law from speaking about elections at all. In the end, Progressive Slovakia lost to Fico’s Smer-SD party, and the political winds have quickly shifted. Fico ran on a populist platform, pledging that his government would “not give a single bullet” to Ukraine. Already heeding Fico’s word, the sitting president opposed a new military aid package for Ukraine just yesterday. And now Fico is expected to forge an alliance with Hungary’s Viktor Orban, the only EU head of government who has sided with Russia since the war began.
The possibility that a piece of evidence was fabricated using AI throws a new digital wrench into the already chaotic and oversaturated media landscape that all voters face in any election cycle. Slovakia isn’t the first country to run into this problem, and it definitely won’t be the last. Similar circumstances are expected in the run-up to Poland’s parliamentary elections later this month, where the war in Ukraine will very much be on the ballot, and where a victory for the right-wing Law and Justice party could add to Orban’s growing camp.
While the debunked audio clip in Slovakia was dutifully garnished with a fact-check label indicating that it may have been fabricated, it’s still making the rounds on Facebook.
In fact, Meta (owner of Facebook, Instagram and Threads) and Google (owner of YouTube) have both indicated in recent months their plans to roll back some of the disinformation-busting efforts that they trotted out following the 2016 election in the U.S. But it is X, formerly known as Twitter, that is leading in the race to the bottom — every week, we see more signs that it has little interest in enforcing its rules on disinformation.
Even the EU itself has brought this up: Last week, European Commission Vice President Vera Jourova called X out on the issue. “Russian propaganda and disinformation is still very present on online platforms. This is not business as usual; the Kremlin fights with bombs in Ukraine, but with words everywhere else, including in the EU,” Jourova said.
Although I was never all that convinced by their fact-checking efforts, it doesn’t help that the tech giants seem to have thrown up their hands on the issue. It leaves me almost nostalgic for a time when all we had to deal with was straight-up false or racist messages flooding the zone. Turns out, things could and did get worse. 2024, here we come.
A Russian blogger was sentenced to eight and a half years in prison after being convicted of reporting “fake” news about Russian military actions in Ukraine. This type of journalism became a crime in Russia shortly after Russian forces launched the full-scale invasion of Ukraine in February 2022. Aleksandr Nozdrinov was arrested not long after the war began and finally received his sentence this week. Nozdrinov maintained a YouTube channel where he regularly posted video evidence of police corruption and malfeasance for an audience of more than 34,000 subscribers. According to the Committee to Protect Journalists, Nozdrinov denies having posted the material cited by prosecutors. He believes that the case against him was fabricated by authorities intent on retaliating against him for his anti-corruption activities on YouTube.
Monday marked the fifth anniversary of the murder of Washington Post columnist Jamal Khashoggi, a Saudi exile and frequent critic of the kingdom’s regime. There is little doubt that Khashoggi’s gruesome killing inside the Saudi consulate in Istanbul came at the behest of Crown Prince Mohammed bin Salman. It later emerged that Khashoggi and some of his closest family members and colleagues were targeted with Pegasus, the notoriously invasive mobile phone spyware built by the Israeli firm NSO Group and used to spy on journalists in more than 50 countries, from Mexico to Morocco to India. The digital dimensions of Saudi Arabia’s tactics of repression don’t stop there, and they certainly are not news. But they do bear repeating.
Researchers in Australia think anti-Indigenous narratives on social media could swing an upcoming referendum. Tomorrow, Australians will vote on whether the country should establish a body that would advise the government on policy decisions affecting Aboriginal and Torres Strait Islander communities. A year ago, public opinion polls indicated that most Aussies — including Prime Minister Anthony Albanese — were in favor of the measure. But that has changed in recent months, and social science researchers say viral, racialized anti-Indigenous messaging campaigns on X and TikTok might have something to do with it. The Conversation is running a series on the issue — it’s worth a read.
WHAT I’M NOT READING: THE NEW MUSK BIOGRAPHY
Instead of reading Walter Isaacson’s new biography of Elon Musk, I have been lapping up the reviews and emoji-hearting other people’s dedication to pointing out everything that somehow failed to make the cut in this 670-page “insight-free doorstop of a book” (Gary Shteyngart’s words, not mine).
In the tome’s final pages, Isaacson writes: “Sometimes great innovators are risk-seeking man-children who resist potty training.” Um, what? As Jill Lepore wrote in The New Yorker: “This is a disconcerting thing to read on page 615 of a biography of a fifty-two-year-old man about whom a case could be made that he wields more power than any other person on the planet who isn’t in charge of a nuclear arsenal.” Since Isaacson didn’t, Lepore took it upon herself to school readers on some of the harsh political realities of apartheid-era South Africa where Musk grew up, noting that his maternal grandfather apparently moved the family from Canada to South Africa because of apartheid. She touches on grandpa’s openly antisemitic views, which Isaacson somehow writes off as “quirky.”
The book also has some pretty serious whoopsies when it comes to details about Musk’s financial moves. In Financial Times columnist Bryce Elder’s acid assessment: “When it comes to money, Isaacson is more a transcriber than a biographer.” Eesh.
Writing for The Atlantic, Sarah Frier had what feels to me like the truest line: “We don’t need to understand how he thinks and feels as much as we need to understand how he managed to amass so much power, and the broad societal impact of his choices — in short, how thoroughly this mercurial leader of six companies has become an architect of our future.”