Big Tech looks the other way on Saudi Arabia’s human rights abuses

Ellery Roberts Biddle



A young woman in Saudi Arabia is in jail for using Twitter and Snapchat to advocate for an end to the country’s male guardianship rules and for failing to wear “decent” clothes. It came to light last month that Manahel al-Otaibi, 29, has been in pre-trial detention since November 2022. But as is too often the case with Saudi Arabia, it took months for the story to reach international media, and publicly available details on al-Otaibi’s case remain scarce. Since court proceedings are almost never made public in Saudi Arabia, we can only speculate about what evidence will actually be brought against her. But it is conceivable that prosecutors will ask one or both companies to hand over her private data — a standard move in cases like these.

What kind of data might the Saudi government expect companies to hand over? The answer may be up in the air, because some of the biggest U.S.-based tech companies have drastically increased their presence in Saudi Arabia in recent years. Last month, Microsoft announced plans to establish a cloud computing center there, a move that was swiftly condemned by advocates in the Gulf region. And Microsoft is late to the party — in 2018, just a few months after Washington Post columnist Jamal Khashoggi was murdered and dismembered at the Saudi consulate in Istanbul, Google signed a memorandum of understanding with Saudi Aramco, the state oil giant, to build a “cloud region” on Saudi soil. The deal became public in 2020, in a rather vanilla blog post touting the benefits of cloud infrastructure for enterprise customers.

When advocacy groups asked what the cloud region would mean for people’s data and privacy in the region, Google responded with a mostly boilerplate letter, in which it offered only this assurance: “An independent human rights assessment was conducted for the Google Cloud Region in Saudi Arabia, and Google took steps to address matters identified as part of that review.”

Did Google really identify human rights-related “matters” so uninteresting that they weren’t worth describing? This is Saudi Arabia, after all.

It reminded me of how Saudi officials initially responded when people asked if they had anything to do with Khashoggi’s murder. They said something to the effect of “we didn’t do it. Just trust us.” If it sounds like a leap to link Google’s data centers with Khashoggi’s murder, remember (or learn) that Saudi intelligence did a whole lot of digital spying on Khashoggi and his contacts before his killing. This government has no qualms about engaging in hardcore surveillance and is even willing to infiltrate a major U.S. tech company — like it did at Twitter in 2015 — to get it done.

Google staffers later explained to us that cloud regions offer infrastructure for all kinds of companies (what they refer to as “enterprise”), often in arrangements where Google actually doesn’t have special access to what’s happening on the platform. But does the company still have some responsibility to safeguard data that it hosts, even when that hosting is happening through its “enterprise” services? Maybe so. 

My friend Mohamad Najem, who runs the Beirut NGO SMEX, put it to me this way at RightsCon last week: “The main question to Google is, ‘We know you’re doing this, but do you want your data center to be affiliated with any potential attack that might happen on netizens?’” Najem argues that the company ought to have some way of auditing its systems to protect against indiscriminate surveillance.
“We analyzed the laws for them,” he said with a shrug, referring to SMEX’s research on personal data-related laws in Saudi Arabia. “We showed them that there is no data protection in these countries. In all cases, you’re putting people’s data at risk.”


Last week, I wrote about how the stuff of Hong Kong’s once-rich intellectual and civic life is disappearing from the internet. More may vanish soon, if local officials beholden to Beijing get their way.

The Hong Kong government wants YouTube to censor 32 videos featuring “Glory to Hong Kong,” a song exalting an independent, free Hong Kong that became the anthem of the city’s 2019 pro-democracy movement. Last week, the Department of Justice petitioned the High Court to issue an injunction that would prohibit “broadcasting, performing, printing, publishing, selling, offering for sale, distributing, disseminating, displaying or reproducing in any way” the song, which is considered an affront to Beijing’s increasing power over the once relatively autonomous territory. Google, the owner of YouTube, has further angered Hong Kong officials because the song apparently often appears at the top of search results for “Hong Kong national anthem.” Officials have called on the company to change the results and put the actual anthem at the top instead.

Back in 2010, Google set up shop in Hong Kong as a way to maintain a presence in greater China without being subject to Beijing’s censorship demands. But times have changed. Apart from the song debacle, Google is hedging its bets with Bard, its AI chatbot, which it has not made available in Hong Kong so far. OpenAI has also held ChatGPT back in Hong Kong. The Wall Street Journal reported that this may be due to fears that such products could run afoul of Chinese laws that criminalize criticism of the government.

But this hasn’t stopped OpenAI CEO Sam Altman from “reaching out” to China. As part of his ongoing global roadshow, Altman made a virtual appearance at a conference hosted by the Beijing Academy of Artificial Intelligence last weekend where he made vague calls for “global cooperation” on AI and praised China for having “some of the best AI talents in the world.” Altman’s probably right on that last point, but between China’s Great Firewall and the ongoing tech trade war between the U.S. and China, I’m not sure what he imagines when he talks about “cooperation.” In case you’ve missed it, Altman has also managed to squeeze in meetings with Indian Prime Minister Narendra Modi, French President Emmanuel Macron and South Korean President Yoon Suk Yeol.

A cash-assistance algorithm funded by the World Bank and deployed in Jordan has serious flaws ranging from coding errors to automating discrimination on the basis of characteristics like gender, according to a new report from Human Rights Watch. Jordan isn’t the only country where the World Bank is pushing tech on the welfare sector. Seven other countries in the Arab region are using the system, also at the World Bank’s behest. Amos Toh, who wrote the report, told me that the World Bank has similar projects underway in other regions. Development Pathways, a Swedish social policy think tank, has documented high exclusion error rates across 29 poverty-targeting programs beyond the MENA region.


  • New research from the Center for Countering Digital Hate shows that Google is allowing anti-abortion groups to purchase deceptive ads in its search engine that direct users to “crisis pregnancy centers” — clinics that steer people away from abortion — by targeting common search terms like “abortion clinic near me.”
  • The internet shutdowns that followed the arrest of former Pakistani Prime Minister Imran Khan last month have precipitated serious losses for the country’s tech sector. Zuha Siddiqui has the story for Rest of World.
  • My new favorite letter about AI was written by Prabha Kannan for The New Yorker’s Daily Shouts section and makes some terrific points, all in the form of a mock open letter à la the Center for AI Safety. “This letter serves as a warning that the human race will definitely be wiped out because of the humanlike A.I. systems that the undersigned are responsible for developing and releasing into the world.” Wink. Strangely enough, The Onion has a piece with a remarkably similar flavor this week. What a world we live in.