When AI brings ‘ugly things’ to democracy

Ellery Roberts Biddle

 

National elections were held in Indonesia this week, and early vote counts suggest that Defense Minister Prabowo Subianto, an army lieutenant general during Suharto’s bloody dictatorship who has been accused of facilitating human rights abuses, will claim victory. Subianto had run two unsuccessful presidential campaigns before, but this time around he got a healthy boost from generative artificial intelligence tools, including Midjourney, the source of a cute and cuddly animated Subianto avatar that became his campaign’s signature image. Staffers and consultants who worked on the campaign and on down-ballot races also told Reuters that they were using OpenAI’s products to “craft hyper-local campaign strategies and speeches.”

The campaign did this in plain violation of both Midjourney’s and OpenAI’s usage policies, which specifically prohibit customers from using the companies’ tools for political campaigning.

Why didn’t the companies step in? For its part, OpenAI told Reuters that it’s investigating the issue. Midjourney did not comment. Either way, it’s hard not to see a parallel here with Dean Phillips, a long-shot Democratic presidential candidate in the U.S. whose campaign used OpenAI’s technology to run a chatbot promoting his messages. Although the campaign was clearly violating company policy, OpenAI pulled the plug on the developer who built the bot only after The Washington Post reported on it.

Both stories raise an important question, especially for OpenAI, the most influential player in the generative AI field at the moment: Apart from acting on media inquiries, what does OpenAI do to mitigate political abuses of its tools? A spokesperson who declined to be named told me that OpenAI uses “automated systems and human review to identify and address violations of our policies on our API.” When violations happen, the company may flag the incident for “human review” or suspend the user altogether. In big-picture terms, the response suggests that OpenAI is following the playbook of its Big Tech forefathers like Facebook and YouTube.
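For readers curious what the “automated systems” half of that answer could look like in practice, here is a minimal sketch using OpenAI’s publicly documented Moderation endpoint. To be clear, this is an illustrative stand-in of my own, not OpenAI’s internal enforcement pipeline, which is not public; note too that the endpoint’s categories cover harms like harassment and violence, not political campaigning, which the usage policies police separately.

```python
# Illustrative sketch only: OpenAI's internal enforcement systems are
# not public. This uses the company's documented Moderation endpoint
# as a stand-in for automated screening; its categories cover harms
# like harassment and violence, not campaign use of the API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="Example text a platform might screen automatically.",
)

result = response.results[0]
if result.flagged:
    # A flag like this is the sort of signal that might be queued
    # for the "human review" the spokesperson described.
    hits = [name for name, hit in result.categories.model_dump().items() if hit]
    print("Flagged categories:", hits)
else:
    print("No moderation categories flagged.")
```

The harder problem described below, recognizing that a flagged request belongs to a political campaign in a particular country, is exactly what a generic classifier like this cannot solve.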

That’s worrisome, because even if the tech here is new, the problem of political actors abusing Big Tech tools isn’t. It may be too soon to know how Subianto will use or abuse technology once in power, but it’s worth looking back at how politicians have weaponized social media platforms over the past decade to promote their agendas and sometimes spread outright lies. Profit-hungry tech companies that are determined to operate worldwide know their products are at risk of being abused. Yet to date, none of the biggest tech companies has truly succeeded at getting ahead of serious abuses on a global scale. It doesn’t help that the worst real-world consequences of these dynamics often play out far, far away from Silicon Valley.

I chatted about this issue a few weeks back with Glenn Ellingson, an ex-Meta integrity engineer who now works with the Integrity Institute, a research group composed mostly of folks who previously held harm-reduction roles at Big Tech companies. Ellingson talked about how hard it is for companies to be “native” in every geographic context.

“Looking at down-ballot elections even in big countries like the United States, or elections that may be national in scope but in smaller nations or nations which your own staff is less culturally connected to, it gets harder and very expensive — maybe impractically expensive — to really be native in each of those contexts,” Ellingson told me. “This means that big global companies will probably catch the big stuff, but they probably won’t be as able to catch problems in all these diverse smaller contexts.”

We talked about places like Myanmar and Ethiopia, two “diverse” though certainly not small contexts in which Meta’s platforms were notoriously abused by military and political leaders perpetrating war crimes. As Ellingson noted, “ugly things” most often emerge “in geographies that don’t really get the attention until something really bad has happened.” And by then, it is too late.

GLOBAL NEWS

In India, AI is giving politicians a bump from beyond the grave. Tamil Nadu politician M. Karunanidhi died in 2018, but recordings of his voice and likeness have recently been used to create videos in which he promotes prominent figures in Dravida Munnetra Kazhagam, the political party he once led. Speaking with tech journalist Nilesh Christopher, the Mozilla Foundation’s Amber Sinha drew a clear distinction between a use like the one in Indonesia and this one. “The use of AI to create synthetic audio and video by a living person who has signed off on the content is one thing. It is quite another to resurrect a dead person and ascribe opinions to them,” she said.

Singapore’s LLM favors “official” histories. The government of Singapore recently launched a large language model, the technology that powers generative AI tools like ChatGPT, trained on major languages of Southeast Asia, including Bahasa Indonesia, Thai and Vietnamese. That matters in a global market where English dominates the AI development landscape. But as Context’s Rina Chandran reported this week, there’s a problem with Singapore’s model: Like other state-led LLM initiatives, it tends to reflect “official” narratives about national history and political figures. Consider Indonesia’s Suharto. While LLMs built by Meta and OpenAI will tell you about the military dictator’s poor human rights record, the Singaporean model focuses “largely on his achievements.” Eesh.

Spyware’s everywhere. Pegasus, the pernicious mobile surveillance software made by NSO Group, was used to target a “very long” list of people in Poland under the country’s previous administration, led by the right-wing Law and Justice Party. New Polish Prime Minister Donald Tusk called out his predecessors on the matter at a press briefing on Tuesday. This is not exactly news: Spyware researchers at the University of Toronto’s Citizen Lab independently investigated and verified suspected infections back in 2021. But Tusk’s statement lends the technical findings the political oomph of a head of government.

WHAT WE’RE READING

  • If you’re looking for more reasons to worry about how Big Tech will affect upcoming elections around the world, check out this new report from Paul M. Barrett’s group at New York University.
  • And if Valentine’s Day had you wondering why your dating app isn’t giving you better results, read media scholar Apryl Williams’ insightful piece on the racial biases embedded in algorithms for apps like Tinder, OkCupid and Hinge. “When we refuse to examine our own prejudices,” Williams writes, “we may miss the perfect match.”