In a year full of elections, does Big Tech have democracy’s back?

Ellery Roberts Biddle


It’s a new year and the artificial intelligence wing of the tech industry is still dominating the headlines and crashing through our lives like a recent Stanford dropout who’s had way too much to drink. But there’s plenty else on our radar here at Coda. 

We’re looking at big changes on the horizon in what promises to be a pivotal year for elections and democratic institutions. More than 2 billion people will vote in 65 countries around the world, and technology will be a critical factor in every aspect of these elections, from information sourcing to actual polling machines.

In Bangladesh, voters have already been to the polls. On January 7, Prime Minister Sheikh Hasina sailed to reelection with an extra boost from her cyber army. She claimed a record fifth term in office (fourth in a row) and her ruling Awami League secured a hefty majority in the country’s parliament. But there wasn’t much contest — the opposition Bangladesh Nationalist Party decided back in November to boycott the election, after thousands of party members and supporters were jailed on what they say are spurious criminal charges. In the end, a dismal 40% of eligible voters cast ballots, according to the country’s chief election commissioner.

In the months before the election, AI-generated misinformation and manipulation were rife, including a series of deepfakes targeting opposition candidates that went viral on Facebook. The Awami League’s “official think tank” is known for the thousands of people it employs to promote its messages on Facebook, which remains the go-to information platform for nearly a quarter of the country’s population of 170 million. In one such fake video, exiled opposition leader Tarique Rahman appeared to be urging party members to keep quiet about their concerns for Gazans under bombardment by Israel, lest it upset the United States. In Muslim-majority Bangladesh, most people identify closely with the Palestinian cause. An opposition party official told the Financial Times that he had attempted to get Facebook’s parent company Meta to remove some of these videos, but that “most of the time they don’t bother to reply.”

A spokesperson for the U.S. State Department said the Bangladesh election was “not free or fair.” And AI played a key role in ensuring that the ruling party could bully and bluster its way back to power.

After Bangladesh, it’s Taiwan’s turn to head to the polls. The election is this Saturday, and the island’s future autonomy could be at stake. William Lai, current vice president to Tsai Ing-wen, is favored to win and would represent a continuation of Taiwan’s delicate position of asserting its self-governance in the face of China’s claims to the territory. These claims have ramped up considerably in recent years, as have Beijing-driven foreign influence campaigns both in traditional media and online. In December, researchers at the network analysis firm Graphika identified a network of accounts across Facebook, YouTube and TikTok that were working together to promote the opposition KMT party, which is considered to be friendly to Beijing. The accounts also played up news stories that cast pro-independence incumbents as incompetent and corrupt. These kinds of narratives are a constant on social media, but their influence could prove pivotal on the eve of an election.

What does Meta have to say about all this misinformation and propaganda? Not enough. Despite being one of the most profitable tech companies in the world, the Silicon Valley behemoth made substantial cuts in 2023, laying off staff researchers who were sometimes able to intervene if they saw online activity that could undermine important election-related information or lead to widespread violence. As if inspired by the five-alarm dumpster fire formerly known as Twitter, Meta has been notably blasé about its decision to gut these teams. Publicly, the company has only laid out plans for how it will handle the 2024 election in the U.S. Apparently, the rest of the world is on its own.

And what about elections in the U.S.? In an election year, it’s hard not to look back at just how bad things can become when platforms as big and powerful as Facebook play host to movements like “Stop the Steal,” which was predicated on the false belief that Donald Trump actually won the 2020 presidential election. That particular campaign helped propel plans for the attack on the U.S. Capitol on January 6, 2021, which left five people dead, more than a hundred police officers injured, and at least half of the country genuinely fearful for the survival of American democracy.

U.S. prosecutors have since brought criminal charges against more than 1,000 people who participated, and social media posts have provided key evidence of their intentions to cause harm. Social media posts from that time have also been leveraged in the litany of election subversion cases against Trump himself.

Throughout Trump’s time in office, the major social media companies gave him an enormous platform for his message, even when he was promoting dangerous ideas that fell afoul of their policies. But it took four years and a coup attempt for them to act — just hours after rioters descended on the U.S. Capitol, Facebook, Twitter and a handful of other platforms decided to “de-platform” the Donald. But by that point, much of the damage had already been done.

First Amendment scholar evelyn douek broke it down brilliantly at the time: “This was a display of awesome power, not an acknowledgment of culpability,” she wrote in The Atlantic. “A tiny group of people in Silicon Valley are defining modern discourse, ostensibly establishing a Twilight Zone where the rules are something between democratic governance and journalism, but they’re doing it on the fly in ways that suit them. In two weeks, Trump will be out of power, but platforms won’t be. They should be forced to live up to the sentiments in their fig-leaf rationales.” If you’re looking for a laugh here, comedian Trevor Noah did a pretty good segment making a similar point.

Would things have been different if the companies had intervened sooner? We’ll never know. On Tuesday, as Trump walked out of an appeals court hearing, the former president told reporters that if the charges against him interfere with his candidacy in the 2024 presidential election, it will lead to “bedlam in the country.” Eesh. If Trump’s threat is any indication, we just may get to see round two in 2024.


  • Social media scholar Joan Donovan, who recently filed a whistleblower complaint over her suspicious dismissal from a top-tier research job at Harvard University (which I covered last month), has a new piece for The Conversation about January 6, digital disinformation and the concept of “networked incitement.”
  • Speaking of disinformation, I recommend this new report from ProPublica and Columbia University’s Tow Center that shows how verified accounts on X are getting plenty of traction spreading disinformation about the war in Gaza.
  • And if you’re curious about The New York Times’ lawsuit against OpenAI, and whether or not it has legs, read Mike Masnick’s breakdown of the suit and its shortcomings. He suggests it could wind up erecting copyright barriers that further entrench the power of the biggest AI companies. I worry he’s right.