‘Small boat’ photos could be censored in the UK, Google’s abortion data problem, the low-wage workers behind AI

Ellery Roberts Biddle

When Edward Snowden showed just how much mass surveillance the U.S. government carries out each day, many in the digital privacy field thought it would mark a turning point for the U.S. We thought people would start to care more about their privacy and that legislators would make policies to protect it. But this never quite took hold. It didn’t help that Snowden wound up in Russia, the ultimate surveillance state. For years, friends and family would say to me things like, “I don’t like it, but I don’t worry about it either. I have nothing to hide.” Ten years after the fact, I think the tide may finally start to turn.

The real-life effects of mass surveillance by state and corporate actors are becoming difficult to ignore. ProPublica brought one to the surface this week, with a hard look at data collection on websites that sell abortion-inducing medications. Reporters found third-party trackers collecting users’ search history, location, device information and a unique browser identifier that can be tied to an individual. With no data privacy laws standing in the way (HIPAA doesn’t apply here), all this data is funneled to companies that serve online ads, Google being the largest. Since Google knows a whole lot about most of us — thanks to Gmail, Google Maps and half a dozen other services you probably used today — the company is now (perhaps unwittingly) in possession of a lot of information about who’s getting or seeking an abortion. If you’re seeking one in a state where abortion is now illegal, your data could be obtained with a subpoena and presented in court. How do you like your privacy now?
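To make the mechanics concrete, here is a minimal sketch of the general technique a third-party tracking script uses: bundle identifying details from a single page visit into one record and forward it to an ad network. Every field name, URL and value below is invented for illustration; none of it is drawn from ProPublica’s reporting.

```python
# Hypothetical sketch of what a third-party tracker embedded on a
# pharmacy site can bundle into a single ad-tech request.
import json
import uuid

def build_tracking_event(page_url: str, referrer: str, user_agent: str,
                         ip_address: str, browser_id: str) -> dict:
    """Assemble the kind of record a tracker forwards to an ad network."""
    return {
        "event": "page_view",
        "url": page_url,        # reveals what the visitor is shopping for
        "referrer": referrer,   # often a search-engine results page
        "user_agent": user_agent,  # device and browser details
        "ip": ip_address,       # coarse location can be derived from this
        "browser_id": browser_id,  # persistent ID linking visits together
    }

# The "unique browser identifier": set once in a cookie, then reused on
# every visit, which is what lets activity be tied back to one person.
browser_id = str(uuid.uuid4())

event = build_tracking_event(
    page_url="https://example-pharmacy.test/abortion-pill",
    referrer="https://www.google.com/search?q=...",
    user_agent="Mozilla/5.0 (iPhone; CPU iPhone OS 16_2 like Mac OS X)",
    ip_address="203.0.113.7",
    browser_id=browser_id,
)
print(json.dumps(event, indent=2))
# In a real tracker, this record would be POSTed to an ad-tech endpoint,
# where it can later be joined with other data about the same browser ID.
```

Once records like this sit on an ad company’s servers, they are ordinary business data, and ordinary business data can be subpoenaed.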

Indian officials don’t want anyone to see a new BBC documentary about the 2002 riots in Gujarat. Invoking “emergency powers,” the government ordered YouTube to censor clips and Twitter to remove links to the film. My colleague Shougat Dasgupta wrote for Coda that the film “held Narendra Modi, then chief minister of Gujarat, ‘directly responsible’ for enabling three days of horrifying violence…[that] resulted in the deaths of a thousand people — nearly 800 of them Muslim.” The ban has only made people want to see the film more. University students have begun holding screenings in defiance of police and university authorities.

And the U.K. Parliament might ban images that depict immigrants arriving on small boats in a “positive light.” The ban could be tacked onto a bill that claims to be about online safety. Proponents say it will help reduce “illegal immigration” and point to organized crime groups that have used social media to promote sea crossings. If this absurdity should somehow become law, Big Tech companies like Meta and Google will have to fish out pictures and videos of these scenes and remove them from their sites. How do you teach an AI to find this stuff? With underpaid human labor, of course. More on this below.

Migrants preparing to cross the English Channel are up against millions of dollars’ worth of security and surveillance infrastructure. Our team dug into this for Undercurrents, our new podcast with Audible.

Subscribe and give it a listen.

THE REAL PEOPLE BEHIND AI

Last week, I noted that ChatGPT has some serious problems, but as one reader pointed out, I offered no further details. So here’s a quick breakdown. Like most AI-driven tools, the chatbot was trained on data from the real world. This means it has just as much capacity to recite love sonnets as it does to spout violent or racist screeds. The Intercept’s Sam Biddle (no relation to me) caught the software engaging in racial profiling. Cybersecurity experts Bruce Schneier and Nathan Sanders wrote about its potential to affect lobbying or even election outcomes. And a number of people have observed that ChatGPT often says things that are simply not true.

It is worth taking a look under the hood here. In 2020, ChatGPT’s predecessor, GPT-3, was skewered by critics after it began rattling off racist epithets and threatening sexual violence with aplomb. So this time around, OpenAI built a system for detecting hateful or harmful speech and taught the AI to filter it out.

This harm-detection system is not another robot. It’s drawn from an entire industry of people, typically contract workers, who review dozens, if not hundreds, of pieces of text, images or videos that either appear on social media platforms or are being used, in the case of OpenAI, to build a new technology. They are doing the dirty work of the internet, helping to rid it of racist diatribes, videos of beheadings, sexual abuse — even though some of this stuff still remains. They are seeing the worst of the worst. 
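To see how that labeled work feeds back into the product, here is a minimal sketch of the general technique: human-labeled examples train a classifier that screens text before it reaches users. This uses Python and scikit-learn; the tiny dataset and the threshold are invented for illustration, and OpenAI’s actual safety system is far larger and more sophisticated.

```python
# Minimal sketch: human labels in, automated harm filter out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each pair is (text, label): 1 = flagged by a human reviewer, 0 = benign.
# In reality, contract workers produce these labels by the hundreds of
# thousands, reading the material so the machine can learn to spot it.
labeled_examples = [
    ("you people deserve to suffer", 1),
    ("i will hurt you if you come here", 1),
    ("this group should be wiped out", 1),
    ("here is a recipe for banana bread", 0),
    ("the weather is lovely today", 0),
    ("thanks for your help with my homework", 0),
]
texts, labels = zip(*labeled_examples)

# Train a simple text classifier on the human-produced labels.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def screen(candidate: str, threshold: float = 0.5) -> str:
    """Withhold a draft reply if the classifier scores it as harmful."""
    p_harmful = classifier.predict_proba([candidate])[0][1]
    if p_harmful >= threshold:
        return "[response withheld by safety filter]"
    return candidate

print(screen("the weather is lovely today"))
print(screen("this group should be wiped out"))
```

The point of the sketch is the dependency it makes visible: the filter is only as good as its labels, and every label is a piece of disturbing content that a human being had to read first.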

This industry provides an essential service. Nevertheless, the biggest of the Big Techs typically outsource it to third-party companies operating in countries where labor is relatively cheap, like India or the Philippines. Or Kenya. Just this week, TIME’s Billy Perrigo dropped a new investigation of Sama, a third-party company that was under contract with OpenAI (maker of ChatGPT) to do just this kind of work in Nairobi. Workers reported being paid between $1 and $2 per hour and having limited access to counseling — a serious problem in a job that asks you to trawl through the dregs of the internet.

But workers are beginning to take a stand and force companies to address some of these problems. Until recently, another major client of Sama’s was Facebook. But last fall, a former employee brought a landmark lawsuit against both companies in Kenya, alleging worker exploitation and illegal “union-busting” activities. Now Meta has ended its relationship with Sama and begun a contract with Majorel, a Luxembourg-based company with a similarly dubious track record. We’ll see where the lawsuit lands and whether all this might help people see that tech isn’t so magical after all — it’s the result of people, in lots of places, working really hard. But only some of them profit from it.

WHAT WE’RE READING (& LISTENING TO)

  • The Indian government has blocked tens of thousands of websites, apps and social media posts since 2015. Check out the New Delhi-based Software Freedom Law Centre’s important new report on these figures.
  • The Border Chronicle has an excellent deep dive on technical surveillance along the U.S.-Mexico border that starts with a robotic dog and continues with some dirt on the multinational IT services company Accenture. 
  • Mobile phones are technically forbidden in U.S. prisons, but people are using them anyway to tell stories about their lives and the conditions on the inside. The Marshall Project has the goods.
