Artificial intelligence is creeping into every aspect of our lives. AI-powered software is triaging hospital patients to determine who gets which treatment, deciding whether an asylum seeker is lying or telling the truth in their application and even conjuring up weird conceits for sitcoms. Just lately, these kinds of tools have been helping killer robots select their targets in the war in Ukraine. AI systems have been shown, again and again, to carry systemic biases, and their growing centrality to the way we live makes the debate over those biases ever more urgent.

In typical tech fashion, AI-driven tools are advancing much faster than the laws that could theoretically govern them. But the European Union, the world’s de facto tech watchdog, is working to catch up, with plans to finalize its landmark AI Act this year.

The use of AI in surveillance and monitoring technology is one of the hot-button issues bedeviling ongoing negotiations. Software used by law enforcement and border agencies increasingly relies on facial recognition and social media scraping tools that amass vast stores of people’s data and use that information to decide whether someone should be allowed to cross a border or how long they must remain incarcerated.

The EU’s draft regulation is premised on the fact that systems like these can present significant risks to people’s rights and well-being. This is especially true when they’re built by private companies that like to keep their code under lock and key.

The AI Act aims to establish a framework for assessing the relative riskiness of different kinds of AI systems, dividing them into four tiers: unacceptable-risk products, such as China-style social credit scores, which would be banned outright; high-risk tools like welfare subsidy systems and surveillance software; limited-risk systems like chatbots; and minimal-risk systems such as email spam filters.

But it has some surprising omissions. Kim van Sparrentak, a Dutch member of the European Parliament who represents the Green Party, was quick to note that the European Council has tried to create carve-outs that would allow law enforcement and immigration agencies to keep using a wide range of these tools, despite their proven risks. In early December, more than 160 civil society organizations issued a statement expressing concern that the law doesn’t account for AI use at the border and unfairly impacts those already on the margins of society, such as refugees and asylum seekers.

“The risk is that we create a world where we continue to believe in the Kool-Aid of AI, and won’t have the right system in place to make sure AI doesn’t inflict [harm] on our fundamental rights,” said Van Sparrentak. 

The AI Act may also run into enforcement challenges. The regulation will apply mainly to companies or other entities that develop and design AI systems, not to the public authorities and other institutions that use them. For example, a facial recognition system could have vastly different implications depending on whether it’s used in a consumer context (e.g., to recognize your face on Instagram) or at a border crossing to scan people’s faces as they enter a country.

“We are arguing that a lot of the potential risks or adverse impacts of AI systems depend on the context of use,” said Karolina Iwanska, a digital civic space advisor at the European Center for Not-for-Profit Law in The Hague. “That level of risk seems different in both of these circumstances, but the AI Act primarily targets the developers of AI systems and doesn’t pay enough attention to how the systems are actually going to be used,” she told me.

Although there has been plenty of discussion of how the draft regulation will — or will not — protect people’s rights, this is only part of the picture. According to Michael Veale, a University College London professor who specializes in digital rights, “the AI Act has to be understood for what it is: a legislative and market instrument.” The reason the European Commission is acting here, said Veale, is that member states have been passing divergent national laws around AI, which create barriers to trade in the internal market. “The concern is they won’t be able to trade AI systems because there’ll be different rules per member state,” said Veale.

Europe’s push to develop rules around AI is aimed at creating a “harmonized market” for the trading of AI systems. “That’s the fundamental logic of the AI Act, above all else,” Veale told me.

Under the current draft of the Act, high-risk tools include AI used in education, employment or law enforcement. For high-risk AI, the Act sets requirements concerning the design, labeling and documentation of any new piece of technology. For everything else, deemed non-high-risk, the Act forbids member states from regulating the systems at all. “That allows non-high-risk systems to move freely across the Union and be traded,” said Veale.

But Veale thinks that goal is naive. “When we say we trade AI systems, that ignores a lot of the practical reality around how business models of AI work,” he said. Nevertheless, it’s the underpinning principle of what we’re seeing. “It’s a legislative idea,” he said. “It’s not, ‘Let’s make the best human rights in the world.’ It’s, ‘Let’s remove barriers to trade for the technology industry.’”

The regulation does not establish an independent entity that will vet or evaluate these technologies — instead, companies will be expected to report on their activities in good faith. A quick look at Silicon Valley gives many people reason to believe this won’t cut it. Under the current draft, “you don’t even have to get a third-party private body to tick off your documentation,” said Veale. “You can just self-certify to a standard for the legislation, and pinky promise you did it correctly.”

Karolina Iwanska was equally worried about the certification requirements, particularly when it comes to tools in the high-risk category. The regulation will require providers to develop a risk management system and ensure their training data is relevant, representative and free of bias, an Achilles’ heel for such tools. There is now a decade of research on the topic, from Latanya Sweeney’s seminal 2013 study on racism in Google’s search algorithm right up to the present day, when ChatGPT, the latest AI-powered chatbot, indulges in casual racism by opining about the value of different people’s lives based on their ethnicity. AI tends to reflect our societies like a mirror: if it’s trained on our unjust reality, or on an unrepresentative data sample, it will harm some people more than others.

So far, experts worry that the regulation will not sufficiently acknowledge how complex these technologies are, and how difficult it can be to change them once they are up and running. “There is an assumption that you can fix the system,” said Iwanska, “but that ignores obligations on authorities that are actually going to be deploying those systems. There is no consideration of systemic biases, for example.” It’s one thing to prevent biases from being coded into a system, or to ensure that it is built on data that is representative of society and free of influence; but AI always reflects its creators, and those creators are mostly affluent white men.

Iwanska also says that drafters have offered little more than lip service to the real need for transparency and accountability around these tools. At present, the AI Act will require technology providers to disclose the intended purpose of their system, who the developer is, their contact details and their certificate number. But, “there’s nothing on the substance of how the system operates, what sort of criteria it uses, how it’s supposed to perform and so on. That’s a big fault that we feel will undermine public scrutiny of what sort of systems are developed,” she said.

The self-certification model borrows from other areas that Europe regulates, but few are as important to society as AI governance. Veale, too, was concerned about the pitfalls of this approach. “The rules are for fundamental rights around things like human oversight, or bias, or accuracy,” he said. “Not only are these things going to be self-certified by companies using this to try and lower the burdens on them, but they’re also going to be made up and elaborated in a completely closed-door, anti-democratic process that’s ongoing right now — even before the law is passed.”

Of course, the law is still being hashed out — it’s impossible to know for certain how it might change the way AI is used by public agencies. “The definitive answer will come in a couple of months, because the legislative process is still ongoing,” said Iwanska. She isn’t yet sure what impact the process will have. “[We] can expect that this proposal will change a lot,” she said. “But it’s not clear yet in which direction — so whether it will improve or be undermined.”

Alex Engler, a fellow in governance studies at the Brookings Institution, believes that where Europe leads, the world will follow. Because the European Union is a 450-million-strong market of consumers, and because it has in recent years managed to bring big tech partly to heel through its regulatory moves, he feels confident that the EU’s AI Act will shift how manufacturers of such systems operate worldwide. We’re already seeing a Europe-wide backlash against AI-powered surveillance systems, which Engler expects will be bolstered by market-wide regulation from the EU. In fact, the European Data Protection Supervisor has welcomed plans to ban military-grade spyware of the type used to monitor politicians and journalists, as part of a proposed Media Freedom Act. And in November 2022, Italy’s data protection agency banned the use of facial recognition systems and other intrusive biometric analysis until the end of 2023, or until laws covering their use are adopted, whichever comes sooner.

The EU’s legislation is part of a broader movement to try and draw boundaries around the development and use of AI systems. In the United States, the White House Office of Science and Technology Policy has put forward a blueprint for an AI Bill of Rights following a year-long consultation with the public and experts, as well as industry. That followed the draft Algorithmic Accountability Act, which was introduced in Congress in March 2022. And in July 2022, plans for the American Data Privacy and Protection Act moved out of the committee stage with rare bipartisan support.

However, Americans shouldn’t hold their breath for anything to change soon, particularly with a new Congress convening this year. “In the U.S., you’re much less likely to see legislation,” said Engler. “There’s no evidence that anything like the Algorithmic Accountability Act is gaining momentum, and there’s a lot of skepticism around the Data and Privacy Protection Act,” he added.

In part, that’s because of the challenge of getting your arms around the morass of complications that legislating AI throws up. This is a global problem. “I don’t think you can write down a single set of rules that will apply to all algorithms,” said Engler. “Can we regulate AI? If you’re expecting a single law to come out that solves the problem, then no.” Yet he does think that governments can do better than they currently are, by adapting themselves holistically to emerging software in general. “That’s what we have to do — and in some ways that’s more daunting and less splashy, right?” Engler said. “It’s a whole-of-government change towards a deeper understanding of technology.”

Despite the political and technological challenges that policymakers have had to grapple with to find consensus on the regulation, Van Sparrentak thinks the effort is worth it, not least because failing to act would allow AI’s use to grow unchecked. “What is most important is, when AI comes in place, people will never stand empty-handed anymore vis-à-vis a computer,” she said. “They’ll have an idea of why the system made a certain decision about their lives, and they’ll get transparency over that.”