Tech platforms run from Silicon Valley, and the handful of men behind them, often seem, and act, invincible. But a legal battle in Kenya is setting an important precedent for disrupting Big Tech's strategy of obscuring and deflecting attention from the effects their platforms have on democracy and human rights around the world.
Kenya is hosting unprecedented lawsuits against Meta Platforms, the parent company of Facebook, WhatsApp, and Instagram. Mercy Mutemi, a Nairobi-based lawyer who made last year's TIME 100 list, is leading the cases. She spends her days thinking about what our consumption of digital products should look like over the next 10 years. Will it be extractive and extortionist, or will it be beneficial? And what does it look like from an African perspective?
The conversation with Mercy Mutemi has been edited and condensed for clarity.
Isobel Cockerell: You’ve described this situation as a new form of colonialism. Could you explain that?
Mercy Mutemi: From the government side, Kenya's relationship with Big Tech, when it comes to annotation work, is framed as a partnership. But in reality, it's exploitation. We're not negotiating as equal partners. Kenyans aren't gaining the skills to build our own AI development capacity. Yet at the same time, workers here are training the algorithms for all the big tech companies, including Tesla and the Walmarts of this world. All that training is happening here, but it just doesn't translate into skill transfer. It's broken up into labeling work without any training to broaden people's understanding of how AI works. What we see is, again, a new form of colonization, where it's just extraction of resources with not enough coming back in terms of value, whether that's investing in people, investing in their growth and well-being, paying decent salaries that help the economy grow, or investing in skill transfer. That's not happening. And when we say we're creating jobs in the thousands, even hundreds of thousands, if the jobs are not quality jobs, then it's not a net benefit at the end of the day. That's the problem.
IC: Behind the legal battle with Meta are workers and their conditions. What challenges do they face in these tech roles, particularly content moderation?
MM: Content moderators in Kenya face horrendous conditions. They're often misled about the nature of the work, not warned that the work is going to be dangerous for them. There's no adequate care provided to look after these workers, and they're not paid well enough. And the companies have created this ecosystem of fear: it's almost as if a special Stockholm syndrome has set in, where you know what you're going through is really bad, but you're so afraid of the NDA that you'd rather not speak up.
If workers raise issues about the exploitation, they’re let go and blacklisted. It’s a classic “use and dump” model.
IC: What are your thoughts on Kenya being dubbed the “Silicon Savannah”?
MM: I do not support that framing, just because I feel it's quite problematic to model your development after Silicon Valley, considering all the problems that have come out of there. But that branding has been part of Kenya's mission to be known as a digital leader. The way Silicon Valley interprets that is by seeing Kenya as a place where they can offload work they don't want to do in the U.S., work that is often dangerous. I'm talking about content moderation work, annotation work, and algorithm training, which by its very nature involves a lot of exposure to harmful content. That work is dumped on Kenya. Kenya says it's interested in digital development, but what it ends up getting is work that poses serious risks, rather than meaningful investment in its people or infrastructure.
IC: How did you first become interested in these issues?
MM: It started when I took a short course on the law and economics of social media giants. That really opened my eyes to how business models are changing. It’s no longer just about buying and selling goods directly—now it’s about data, algorithms, and the advertising model. It was mind-blowing to learn how Google and Meta operate their algorithms and advertising models. That realization pushed me to study internet governance more deeply.
IC: Can you explain how data labeling and moderation for a large language model – like an AI chatbot – works?
MM: When the initial version of ChatGPT was released, it had lots of sexual violence in it. So to clean up an algorithm like that, you teach it all the worst kinds of sexual violence. And who does that? The data labelers. For them to do that, they have to consume it and teach it to the algorithm. They needed to consume hours of text describing every imaginable form of sexual violence, like a rape or the defilement of a minor, and then label that text. Over and over again. So then, what the algorithm knows is, okay, this is what a rape looks like. That way, if you ask ChatGPT to show you the worst rape that could ever happen, there are now measures in place that tell it not to give out this information, because it's been taught to recognize what it's being asked for. And that's thanks to Kenyan youth whose mental health is now toast, and whose lives have been compromised completely. All because ChatGPT had to be this fancy thing that the world celebrated. And Kenyan youth got nothing from it.
This is the next frontier of technology, and they’re building big tech on the backs of broken African youth, to put it simply. There’s no skill transfer, no real investment in their well-being, just exploitation.
IC: But workers aren’t working directly for the Big Tech companies, right? They’re working for these middlemen companies that match Big Tech companies with workers — can you explain how that works?
MM: Big Tech is not planting any roots in the country when it comes to hiring people to moderate content or train algorithms for AI. They're not really investing in the country, in the sense that there's no actual person to hold liable should anything go south. There's no registered office in Kenya for companies like Meta, TikTok, or OpenAI. And really, it's important that companies have a presence in a country so that there can be discussions around accountability. But that part is purposely left out.
Instead, what you have are these middlemen. They're called Business Process Outsourcing companies, or BPOs. They're run from the U.S., not locally, but they have a registered office here and a presence here: a person who can be held accountable. And then what happens is the big tech companies negotiate their contracts with these businesses. So, for example, I have clients who worked for Meta or OpenAI through a middleman company called Sama, others who worked for Meta through another called Majorel, and others who worked for Scale AI through a company called RemoTasks.
It's almost like they're agents of the big tech companies. They will do big tech's bidding. If big tech says jump, then they jump. So we find ourselves in this situation where these companies exist purely as cover, a way of escaping liability.
And in the case of Meta, for example, when recruitment happens, the advertisements don't come from Meta, they come from the middleman. And what we've seen is purposeful, intentional efforts to hide the client, so as not to disclose that you're coming to do work for Meta… and not even being honest or upfront about the nature of the work, not even saying that this is content moderation work you're coming to do.
IC: What are the repercussions of this on workers?
MM: Their mental health is destroyed – and there are often no measures in place to protect their well-being or respect them as workers. And then it’s their job to figure out how to get out of that rut because they still are a breadwinner in an African context, and they still have to work, right? And in this community where mental health isn’t the most spoken-about thing, how do you explain to your parents that you can’t work?
I literally had someone say that to me—that they never told their parents what work they do because how do you explain to your parents that this is what you watch, day in, day out? And that’s why it’s not enough for the government to say, “yes, 10,000 more jobs.” You really do have to question what the nature of these jobs is and how we are protecting the people doing them, how we are making sure that only people who willingly want to do the job are doing it.
IC: You said the government and the companies themselves have argued that this moderation work is bringing jobs to Kenya, and there's also been this narrative that, almost like NGOs, these companies are helping lift people out of poverty. What do you say to that?
MM: I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me. That looks like entrenching the problem further, because you've destroyed not just one person, but everybody who relies on that person and everybody who's now going to be roped into caring for that one person. You've destroyed a bigger community than the one you set out to help.
IC: Do you feel alone in this fight?
MM: I wouldn't say I'm alone, but it's not a popular case to take on at this time. Many people don't want to believe that Kenya isn't really benefiting from these big tech deals. It's not a narrative that Kenyans want to believe, and it's just not the story the government wants at the end of the day. So not enough questions are being asked. No one's really pulling back the curtain to ask: what is this work? Are our local companies benefiting from it? Nobody's really asking those questions. So then, in that context, imagine standing up to challenge those jobs.
IC: Do you think it’s possible for Kenya to benefit from this kind of work without the exploitation?
MM: Let me just be very categorical. My position is not that this work shouldn't be coming into Kenya. But it can't be the way it is now, where companies get to say, "Either you take our work, as horrible as it is, with no care, and we exploit you to our satisfaction, or we leave." No. You can have dangerous work done in Kenya, but with an appropriate level of care, with respect, and while upholding the rights of these workers. It's going to be a long journey to achieve justice.
IC: In September, the Kenyan Court of Appeal ruled that Meta, a U.S. company, can be sued in Kenya. Can you explain why this is important?
MM: The ruling by the Court of Appeal brings relief to the moderators. Their case at the Labour Court had been stayed while we awaited the Court of Appeal's decision on whether Meta can be sued in Kenya by former Facebook content moderators. The Court of Appeal has now cleared the path for the moderators to present their evidence against Meta, Sama, and Majorel for human rights violations. They finally get a chance at a fair hearing and access to justice.
The Court of Appeal has affirmed the groundbreaking decision of the Labour Court that, in today's world, digital workspaces are adequate anchors of jurisdiction. This means that a court can assume jurisdiction based on the location of an employee working remotely. That is a timely decision, as the nature of work and workspaces has changed drastically.
What this means for Meta is that they now have a chance to fully participate in the suit against them. What we have seen up to this point is constant dismissiveness of the authority of Kenyan courts, with Meta claiming they cannot be sued in Kenya. The Court of Appeal has found that they not only can be sued but are properly sued in these cases. We look forward to participating fully in the legal process and presenting our clients' case to the court for a fair determination.
Correction: This article has been updated to reflect that the Court of Appeal ruling was in regard to the case of 185 former Facebook content moderators, not a separate case of Mutemi’s brought by two Ethiopian citizens.