Black box AI and the death of the digital public square

Ellery Roberts Biddle


About a year ago, it became popular for Western media commentators to sound the death knell for the social web. Elon Musk settled in as the new owner of Twitter, and the mainstream social media platform that had come closest to approximating a digital public square began its spectacular decline.

Social media was once a place to hear and express opinions, to get and report the news, to decide what might or might not be true. All these things beckoned us to interact with each other and also to understand, and sometimes challenge, the underlying technology. When content got censored or harassment got unbearable, users spoke up and pressured the companies to respond. Even if it was all happening in a privately owned “quasi-public sphere,” users behaved as if they had some rights. And every once in a while, the companies gave that idea some credence.

Watching artificial intelligence’s biggest purveyors soar to prominence in the global political imagination this year, I’ve found myself wondering: What will happen to all that democratic energy around Big Tech? What will happen to the idea of digital rights?

Unlike some of the mammoth social platforms that dominated the industry for the past decade and a half, the shiny new things we see on our screens now, like ChatGPT, reveal very little about their inner workings. The biggest and most consequential types of AI at this moment are being built inside black boxes, and they aren't predicated on any of the ideas about human connection that were used to underwrite the social media industry. There is no illusion of democracy here, no signs of cohesion among users pushing companies to change in any particular way. The reason is simple: We really don't know what's going on behind the screen.

For tech elites and tech-inclined media, AI’s meteoric rise has made for great theater. But for most of us, much of what is going on is shrouded in mystery and obfuscation. Alongside it all, though, far less magical kinds of tech have continued to change the way we live, work and understand the world around us. This has been the core focus of our tech coverage at Coda this year.

Some of our strongest tech stories helped show how the digitization of public systems and widespread real-time surveillance are changing urban life. Drawing on research from the Edgelands Institute, we paired writers in Medellín, Nairobi and Geneva with photographers from the Magnum network to build a rich narrative and visual tapestry that wrestled with the social and psychological effects of these systems, alongside their technical components. And one of our top-performing features, from Bruno Fellow Anna-Cat Brigida, dove deep into how police surveillance systems in Honduras have bolstered a state determined to “protect its own power and preserve its status as Central America’s largest drug corridor.”

In 2023, we also took a hard look at the ever-expanding role of technology in migration. Coda’s Isobel Cockerell traveled to Kukes, Albania, where she reported on how digital platforms like TikTok and Instagram have played a pivotal part in driving thousands of young men to leave Albania for England, often on small boats and without proper paperwork, only to find themselves indebted to smugglers and criminal gangs.

Surveillance and digitization have become part and parcel of apparatuses of control on national borders. In May, Zach Campbell and Lorenzo D’Agostino introduced us to Fabrice Ngo, a Cameroonian car mechanic who nearly lost his life on a small boat heading for Italy from Tunisia, after Tunisian coast guard officials tracked the vessel and seized its motor. In an exclusive investigation for Coda, Zach and Lorenzo were able to link Ngo’s experience to the dealings of the International Centre for Migration Policy Development, a Vienna-based agency that has received hundreds of millions of euros in contracts from the European Union to supply tools and tactics — including surveillance tech — to countries bordering the EU bloc in exchange for their cooperation in preventing people from migrating to Europe. With more than 2,500 migrants having died trying to cross the Mediterranean Sea this year, the consequences of these agreements, and the technologies they deploy, couldn’t be more stark.

The dangers and shortcomings of tech are evident on the U.S.-Mexico border too. Former Coda reporter Erica Hellerstein told us the story of Kat, a woman who had fled gang violence in Honduras, only to find herself unable to seek asylum in the U.S. because of a faulty smartphone app. This spring feature took a long look at the Biden administration’s decision to outsource some of the most critical steps in the asylum-seeking process to the app, called CBP One.

But that story also found a glimmer of hope on the horizon for 2024. In August, an immigrants’ rights coalition filed a class-action lawsuit against the Biden administration over its use of the app, setting the stage for a showdown over the digitization of immigration and the principles underlying the modern asylum system. As Erica wrote in her follow-up, “Imagine telling the authors of the modern asylum system, which was created after the Holocaust, that this guarantee is only accessible to people who arrive at the border with a miniature computer in their pocket.”

This year, we also set our sights on understanding more deeply what kinds of labor go into the technologies that are changing our lives. In the fall, Erica introduced us to the world of social media content moderation in Nairobi’s “Silicon Savanna.” Moderators spoke of reviewing hundreds of posts each day, from videos of racist diatribes to beheadings and sexual abuse. On low wages and minimal benefits, these workers ensure that the worst stuff posted online never reaches our screens. But the toll this takes on their lives and mental health has brought the labor force to a breaking point. As Wabe, a moderator from Ethiopia, told Erica: “We have been ruined. We were the ones protecting the whole continent of Africa. That’s why we were treated like slaves.”

It sounds grim, but what drew us to this story was what Wabe and nearly 200 other moderators have decided to do about their situation. In March, they brought a lawsuit against Meta that took the company to task over poor working conditions, low pay and several cases of unfair dismissal. They've also voted to form a new trade union that they hope will force tech companies to change their ways. These developments could mark a turning point for the industry, and for the way we understand labor in the context of Big Tech. Their story sheds a not entirely flattering light on the massive human labor force that powers all of the technology we use, AI included. The work of people like Wabe to hold Big Tech to account is helping all this to become more visible to the rest of us, something that we have to grapple with as more and more aspects of our lives become digitized. And that gives me some hope for the future.