Smoking gun evidence of state surveillance tech in the U.S. and Mexico

Ellery Roberts Biddle


Authoritarian governments around the world are well known for their mass surveillance technologies. But it’s not just the West’s favorite bad guys, like China and Russia, who take advantage of these capabilities. 

The U.S. is deep in the game too. The American Civil Liberties Union recently won the release of thousands of documents, as part of an ongoing lawsuit against the FBI, that detail the role U.S. officials played in building a mass surveillance apparatus. Working with academic researchers, the FBI and the U.S. Department of Defense sought to develop facial recognition technology that could watch and follow millions of people, some at “target distances” of more than half a mile, without their knowledge. The system was ultimately folded into a tool operated by the Pentagon’s Combating Terrorism Technical Support Office, which provides military technologies to civilian police forces across the country. 

These are serious revelations, especially since we know that police in New Jersey, Michigan, Louisiana and a handful of other states have wrongfully arrested people — most of them people of color — on the basis of a bad match by a facial recognition algorithm. We also know from independent research and a massive, federally funded study that these tools have higher error rates for people with darker skin and for women. 

Legislators in a handful of cities and states in the U.S. have started trying to rein in the use of such technology, but it hasn’t been easy, especially given how fast the industry is moving. For a closer look at the push and pull of these proposals in cities across the country, check out this piece by my colleague Erica Hellerstein.

If you’re worried about this in the U.S. and need a quick solution, I’d suggest masking up. But this might not work if you’re in New York City. An NYPD official recently suggested that businesses start asking shoppers to remove their masks when they walk in so that surveillance cameras can adequately capture their faces.

Mexico had some major surveillance revelations this week too. For years, there’s been evidence that journalists, human rights defenders and opposition politicians in Mexico have been targeted with Pegasus, the pernicious mobile spyware made by Israel’s NSO Group. But now advocates have a smoking gun, thanks to a trove of internal documents from the military and Mexico’s secretary of defense that became public this week. Obtained by the hacker group Guacamaya and vetted by local legal and technical experts, the documents provide a paper trail indicating that the military was using the spyware as recently as 2020, despite having no legal standing to intercept private communications. The documents show that agents used the spyware to surveil Raymundo Ramos, a human rights defender who had been working to expose the extrajudicial killings of three young people in Nuevo Laredo in July 2020.

One of the expert groups, R3D, has a nice walk-through of the documents (in Spanish), and there has been some comprehensive media coverage in both Spanish and English. Over the past decade, researchers and advocates have worked to document the evidence of these tools being bought and used to target people for political purposes. But who actually buys it and does the targeting? And what exactly are they trying to achieve? In Mexico, these questions are finally getting answers. And they are chilling.


EU policymakers are rushing to finalize the AI Act, a landmark regulation that will help determine what kinds of technologies can be developed and sold within the bloc, based on how risky they are to the public interest. By the end of March, when the regulation is finally expected to be set in stone, it will be a big deal for the entire world. A significant player on the global stage will have set some boundaries around what kind of technology is safe enough to build and sell, and what kinds aren’t.

It will surely have problems — one of the most contentious issues on the table has been the question of border surveillance and whether the regulation will protect the rights of people seeking refuge in the EU — but it seems like it will put some onus on tech companies to assess the risks of their products before releasing them into the wild.

Right now, we’re still operating in a nearly lawless global marketplace where anyone who’s clever enough to cook up a hot new idea for a tech product, round up the engineers to build it and convince some rich people to fund it has little in the way of regulation standing in their way, no matter how harmful their product might be. This is why Pegasus thrives and why facial recognition tools land innocent people in jail. 

Plenty of governments — authoritarian, democratic and everything in between — are using these tools with no real due diligence or accountability. Imperfect as the EU legislation will likely be, the AI Act will establish a standard that attempts to protect the public interest. And while the act will fall outside of the jurisdictions where the world’s most powerful tech companies are headquartered (China and the U.S.), it will probably exert some pressure on global tech markets that could pull those companies in a slightly more responsible direction.


  • Anti-Black hate speech and attacks on people from sub-Saharan Africa in Tunisia have spiked since Tunisian president Kais Saied delivered a speech riddled with racist remarks about Black Africans. Middle East Eye is tracking how the issue is playing out on social media.
  • Our friends at Lighthouse Reports published a major investigation of algorithms used by cities and states to try to detect fraud within their welfare systems. The series shows how many of these systems, which affect the lives of some of the most marginalized people in society, are biased on characteristics like ethnicity, age, gender and parenthood.
  • And there’s been more buzz about a ban on TikTok in the U.S. this week, which as usual doesn’t seem especially well grounded in information about the technology itself. Yameen Huk has an interesting take for Inkstick Media on what legislators could do to regulate it, instead of just censoring it outright.