EU’s weak AI law sets a low bar for global facial recognition regulations

The Artificial Intelligence Act would be the first legal framework in the world on the use of AI, but it sets toothless standards for restricting biometric surveillance

The European Commission, widely recognized as setting the global standard for data protection, has come under fire for proposing toothless regulation of artificial intelligence and biometric surveillance. The proposal raises fears that, by setting a low baseline for privacy protections, it will lead governments around the world to fail to enact adequate safeguards for their own citizens.

If technology is sold on the basis of this law, said Caitlin Bishop, a campaign manager at Privacy International, “that’s a problem.”

Critical reaction to the European Commission’s proposed Artificial Intelligence Act was sparked on April 14, when a draft was leaked to Politico. The law would be the first legal framework in the world on the use of AI, and it also creates standards for the use of remote biometric mass surveillance tools in public spaces. However, it contains significant exemptions and does not outlaw facial recognition outright, a step many had deemed essential for the law to be effective.

The European Data Protection Supervisor, a key adviser to the European Commission on privacy issues, has called for an outright ban on facial recognition.

Experts argue that the draft legislation is not sufficiently focused on privacy rights. “It feels to us that the regulation, more than anything, determines what AI can be sold, and focuses more on the relationship between the people that are kind of developing the AI and deploying the AI than the people who will be surveilled,” said Bishop. 

The legislation does not cover private companies’ use of facial recognition, for example. 

“Face recognition isn’t less concerning, invasive or harmful when it’s being used by private companies,” Bishop said. “In fact, sometimes it’s more so, because you have fewer rights when it comes to your interactions with those companies.” 

It is also unclear whether biometric surveillance tools like facial recognition, gait recognition (technology that identifies people by their walk) or emotion recognition can be used by public entities other than law enforcement, like welfare services or transportation authorities.

The European Commission’s proposal only provides narrow restrictions on live, real-time facial recognition, explained Albert Fox Cahn, the founder and executive director of the Surveillance Technology Oversight Project.

“But there is no ban on historical facial recognition where you take a crime scene photo or CCTV photo and run it through facial recognition, which is the dominant form of facial recognition being used around the world,” he explained. 

The proposal has also been met with frustration within the European Parliament.

In a letter addressed to the European Commission, 40 Members of the European Parliament criticized an important exemption that allows public authorities, and potentially private companies acting on their behalf, to use AI-enabled biometric surveillance tools “in order to safeguard public security.” 

European regulations have been used as a yardstick for other privacy legislation from California to Uganda. Similar conversations about regulating facial recognition are happening in parallel around the world. On May 7, a court in São Paulo, Brazil, blocked the use of the technology on the city’s metro.
