This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover. Click here to listen.  

We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4×4, staring up at the high-rise building where he used to work. 

Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:

“The world is an evil place, and nobody’s coming to save you.”

It wasn’t until Mojez started work that he realised what his job really required him to do. And the toll it would take.

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?


It turned out Mojez’s job wasn’t in customer service. It wasn’t even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content posted on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world.

“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”

There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.

These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.

This isn’t just about “filtering content.” It’s about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity’s darkest impulses so that the rest of us don’t have to.

Mojez is fed up with being invisible. He’s trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?” 

We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There’s mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister. 

It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta. 

Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or Business Process Outsourcing company – a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards.

She started out working for an AI company – she did not know the name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the cars’ artificial intelligence to recognize hazards on its own. 

And then, she switched to working for a different client: Meta. 

“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn’t go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death. 

“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said. 

Silicon Valley likes to position itself as the pinnacle of innovation. But what it hides is an incredibly analogue, brute-force process in which armies of click workers relentlessly correct and train the models. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the maker of ChatGPT.

Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. “I realised, like, yo, they’re really taking advantage of people who are from the slums,” she said.

After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stallholders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.

Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and OpenAI.

Sama claimed it offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, the company argued, had the potential to be a profound equalizer, to create opportunities where none existed.

Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago Felix heard about the Sama training school – back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.

“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded his head out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with Samasource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty.

And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.

But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn’t say it’s helping anyone, it’s just taking advantage of people,” he said.

When we reached out to Sama, a PR representative disputed the notion that the company was taking advantage of its workers or cashing in on Silicon Valley’s headlong rush towards AI.

Mental health support, the representative insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train its models, the company said it believed data annotation work needed to be done humanely. The efforts of the Kenyan workers were, it said, “immensely valuable.”

You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.

Sama response

Based on the statements you shared below, we want to be clear that Sama vehemently disputes them, including that the company did not provide adequate mental health support.

Mental health services were provided on site by fully-licensed professionals. These services were (and still are) available at all times employees are working. When Sama employed content moderators, the company mandated (at minimum) one group and one 1:1 session each month for moderators, while team members also had unlimited access to 1:1 sessions. Sama counselors consistently walked the production floor to be readily available to individuals. Employees are provided full medical benefits, and for content moderators, those benefits were available starting on day one of their employment. These benefits include access to psychological and/or psychiatric care outside of Sama for all employees.

Onboarding process: All employees underwent a rigorous, thorough evaluation process before officially starting work at Sama. At any given time in that process, employees had the choice to opt out – in fact, there were four specific points in time during the evaluation process where employees had to give express permission to continue.

NDAs: All Sama employees sign NDAs which is a common practice all around the world for the nature of the work we do. These NDAs do not prevent employees from speaking to mental health professionals.

To be clear, the content moderation work that Sama did was for one client only. We took on one small pilot project for a couple of months on behalf of another client, but we exited that pilot project early because it was not in line with our work. All content moderation work was fully exited by March 2023. There is zero correlation between our decision to exit content moderation and employee complaints, which only happened after we had fully exited the business.

Shifting to for profit status: The Sama impact model is based on the notion that talent is distributed equally, but opportunity is not. For 15 years, we’ve proven that adage is true, and our for-profit status has allowed us to attract additional business and investment, leading to expanding our workforce. We’ve proven that a for-profit model can still be rooted in impact and be highly effective. The nature of the work Sama does did not change when we switched to for profit status.

Sama mission and employee satisfaction: Sama has a 16 year track record of delivering meaningful work in Subsaharan Africa, lifting nearly 70,000 people out of poverty. Sama currently employs over 3,000 individuals in Kenya and is one of the only data annotation companies that offers full-time employment contracts with a guaranteed base salary and benefits. The vast majority of our employees report positive experiences with Sama, including a recent, anonymous employee satisfaction survey which reported:
A 68% overall satisfaction rate by employees on the production side.
78% saying they believe Sama prioritizes well-being
54% are happy with salary
61% are happy with benefits

Despite its defense of its record, Sama is facing legal action in Kenya.

“I think when you give people work for a period of time and those people can’t work again because their mental health is destroyed, that doesn’t look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.

OpenAI response

“Our mission is to build safe and beneficial AGI, and collecting human feedback is one of our many streams of work to guide the models toward safer behavior in the real world.

We believe this work needs to be done humanely and willingly, which is why we establish and share our own ethical and wellness standards for our data annotators. We recognize this was a challenging project for our researchers and annotation workers in Kenya and around the world—their efforts to ensure the safety of AI systems has been immensely valuable.”

“You’ve used them,” Mutemi said. “They’re in a very compromised mental health state, and then you’ve dumped them. So how did you help them?” 

As Mutemi sees it, the result of recruiting from the slum areas is that you have a workforce of disadvantaged people, who’ll be less likely to complain about conditions.

“People who’ve gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it’s just extraction of resources, and not enough coming back in terms of value whether it’s investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That’s not happening.” 

“This is the next frontier of technology,” she added, “and you’re building big tech on the backs of broken African youth.”

At the end of our week in Kenya, Mojez takes us to Karura Forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator.

Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody’s coming to save us. So nowadays I’ve gone back to how my ancestors used to do their worship — how they used to give back to nature.” We’re making our way towards a waterfall. “There’s something about the water hitting the stones and just gliding down the river that is therapeutic.”

For Mojez, one of the most frightening things about the work he was doing was the way that it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured – while trying to hit performance targets every hour – made him switch off his humanity, he said.

A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?

Artificial intelligence may well go down in history as one of humanity’s greatest triumphs.  Future generations may look back at this moment as the time we truly entered the future.

And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.  

So, we face a question: what legacy do we want to leave for future generations? We can’t redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like that of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than the select few.

Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.