Echoing the U.S., the U.K. government goes all in on migrant surveillance

Frankie Vetch


The U.K. continues to expand its use of facial recognition technology, despite a landmark decision two years ago when the Court of Appeal ruled that South Wales police’s use of facial recognition technology breached privacy rights, data protection laws and equality laws. 

Earlier this month, it was revealed that the U.K. government had signed a £6 million contract with private manufacturer Buddi Limited to use facial recognition smart watches to monitor so-called “foreign national offenders.”

This effectively means that people without British citizenship who have been convicted of a criminal offense in the U.K. will have to scan their faces on the watches up to five times a day, with their locations tracked 24/7. A manual check will be conducted if the photos taken do not match the person’s biometric facial image. The wearer’s name, date of birth, nationality and photographs will be stored for up to six years in a government database. The technology is reminiscent of the SmartLINK app used in the U.S. to monitor immigrants, which we have reported on before.

Buddi Limited, the company that was awarded the contract, is wholly owned by Big Technologies Plc. The non-executive chairman of Big Technologies is Simon Jeremy Collins, who, together with Mrs Simone Collins, who appears to be his wife, owns Simon J Collins and Associates Limited.

In November 2019, this company donated £50,000 to the Conservative Party. During the pandemic, billions of pounds’ worth of contracts were awarded to companies run by friends or associates of Conservative Party politicians.

Convicted criminals aside, migrants who enter the country through irregular routes are also subject to invasive surveillance technologies. These include GPS-enabled tracking devices, as we reported last week.

A Home Office spokesperson told me over the phone that people who are convicted of entering the U.K. illegally would be included in the category of “foreign national offenders.”

Monish Bhatia, a lecturer in criminology at Birkbeck, University of London, who specializes in migration, says, “We know these technologies easily cross over. Whatever technology is used within criminal justice crosses over to immigration eventually.”

Under the U.K.’s new Nationality and Borders Bill, Bhatia adds, “if you are caught breaching an immigration regulation or just violating immigration rules, you can be prosecuted under criminal law.” This includes refugees and asylum seekers, which means the technology could be used on people who flee to the U.K. to escape oppression and even war.

The U.K. is also expanding the use of facial recognition technology in its border controls. As part of a trial beginning next year, visitors from Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE will have to upload a photograph of their face during a pre-screening process, which will be cross-checked with a biometric face scan when they arrive in the U.K. 

This initiative, which will eventually be expanded to all visitors, will enable people to pass through the border without interacting with a border official. It has been described by Home Secretary Priti Patel as part of a plan to take back control of the U.K.’s immigration system.

While advocates of facial recognition technology point to its potential to increase efficiency, privacy rights groups fear how the data could be used. Sam Grant, Head of Policy and Campaigns at Liberty, told me over email that “facial recognition does not make people safer, it entrenches patterns of discrimination and sows division.”

It is, he wrote, “impossible to regulate for the dangers created by a technology that is oppressive by design. The safest, and only, thing to do with facial recognition is to ban it.”


Facebook is making money off of white supremacist content. Though the company claimed it had taken steps to ban hate groups from the platform last year, extremism continues to flourish on the social media network and helps it turn a profit, according to a recent investigation by the Tech Transparency Project. Researchers uncovered dozens of white supremacist groups still active on the platform, including ones already designated by Facebook as “dangerous organizations.” They also found that Facebook routinely monetized searches for groups linked to extremism, running ads in more than 40% of searches for white supremacist groups. 

Disturbingly, some ads appeared to place a spotlight on targets for extremist threats. Searches for groups with “Ku Klux Klan” in their names surfaced ads for Black churches — a “chilling result given reports that the gunman who killed 10 people in the racially motivated mass shooting in Buffalo, New York, did research to pick a target neighborhood with a high ratio of Black residents,” the report’s authors write. Of course, Facebook isn’t the only platform where hatred continues to spread unchecked. A few months ago, we covered YouTube’s quiet extremism problem, and why moderating violent and conspiratorial speech has proven so difficult for the global streaming giant.

The Solomon Islands has signed a massive phone tower deal with Chinese telecommunications giant Huawei. The deal, made possible by a $100 million loan from China, will see the construction of 161 mobile phone towers around the Pacific island nation, tying a large part of the country’s communications infrastructure to Huawei. The partnership has heightened concerns about China’s growing influence in the region, following the establishment of diplomatic ties between the two countries in 2019. In April, China and the Solomon Islands signed a controversial security agreement that would allow China to base troops in the country, raising tensions with Australian lawmakers worried about Beijing’s regional sway. Huawei is currently banned from building 5G networks in Australia, Canada, and the United States due to national security concerns.

American lawmakers are pushing back on the government’s plans to expand the “digital border wall.” A coalition of U.S. representatives published a letter last week urging Congressional leaders to scale back investments in surveillance technology in the government’s proposed budget for the Department of Homeland Security. As we’ve reported, lawmakers in recent years have invested hundreds of millions into surveillance technology to deter migrants from crossing the U.S.-Mexico border, blanketing the region with video cameras, surveillance towers, underground sensors, drones, and facial recognition cameras.

Now, the Biden administration is asking for upwards of $1 billion in funding for border surveillance technology in 2023, according to a budget analysis by the immigration legal firm Just Futures Law. That’s on top of the $425 million the government already poured into border surveillance last year. Proponents of the technology, including leading Democrats, say a digital wall is more humane and effective than a border wall. But our reporting has found that the program harms border crossers and U.S. residents alike, funneling migrants into treacherous routes through the desert and exposing border communities to persistent and intrusive surveillance. And it’s not just the U.S. government lavishing funds on migrant surveillance, as my colleague Frankie Vetch explains above.


Mozilla’s 2022 Internet Health Report is all about artificial intelligence and, luckily for those of you tired of staring at screens all day, this year it’s taken the form of a five-episode podcast series. Topics range from the use of AI in warfare to “spatial apartheid” in South Africa. Learn more here.

This newsletter is curated by Coda’s senior reporter Erica Hellerstein. Liam Scott and Rebekah Robinson contributed to this edition.