The European Union is currently drafting a new omnibus framework — the first of its kind in the world — to regulate the use of artificial intelligence. The Artificial Intelligence Act is an attempt to create a legal framework that tech companies and governments would have to adhere to, including when testing new AI-powered technologies along European borders.
Currently fraught with delays, deadlocks and difficulties, the AI Act has the potential to be as powerful as the EU’s landmark GDPR, the regulation that governs data protection across the bloc. And there are many marginalized groups who could benefit from the new legislation, or suffer disproportionately if certain amendments don’t make it through.
For migrants crossing Europe in search of a safer and more dignified life, the law could have huge implications. Currently, Europe’s borders are a highly digitized, unregulated gray zone for tech companies and border agencies to test the latest developments in surveillance technology and predictive algorithms.
Europe’s borders bristle with drones, tracking and predictive technologies designed to make efficient guesses at which routes migrants might take. AI-powered lie detectors are also being deployed on arriving migrants, along with a vast range of other technologies. The European border could be described as a “testing ground,” said Petra Molnar, Associate Director of the Refugee Law Lab at York University and fellow at Harvard’s Berkman Klein Center. I spoke to her about what AI regulation could mean for people on the move — and for all of us.
This conversation has been edited for length and clarity.
So why is the AI Act relevant for migrants crossing Europe?
Globally speaking, there are very few laws on the books right now that can actually be used to govern tech. And currently, the border is a particularly unregulated space, and it’s become a testing ground for a lot of things, including tech. So the AI Act — if we can push through certain amendments — is a really unique opportunity to try and think through how we can create oversight, accountability and governance on all sorts of technologies at the border.
This act touches on pretty much everything from toys to predictive policing and AI-powered lie detectors. We really want to get policymakers to think about whether the act goes far enough to regulate or even ban some of the most high-risk pieces of technology because currently, it really doesn’t. But unfortunately, we don’t have high hopes that the migration stuff is going to be taken up in the way that I think it should.
If you zoom out from the AI Act and you look at just the way that the EU has been positioning itself on migration, then you can see that securitization, surveillance, returns and deportations, importation of technology and facial recognition have all been really normalized. The EU doesn’t really have an incentive to regulate tech at the border, because it wants to test out certain things in that space and then potentially use them in other instances. And the same with the private sector.
It’s also important to remember that there are vast amounts of money floating around to fund these tech projects. There’s money to be made on border tech — so that disincentivizes regulation. At the moment, it’s a free-for-all. And in an unregulated space, there’s a lot of room for experimentation.
What do you mean by experimentation? What kind of things are being tested out in Europe right now that you would like to see the back of?
We are trying to get the European Union to think about banning, for example, predictive analytics used for border enforcement. It’s a tool that assists border guards in their operations to push back people on the move. The European border agency, Frontex, has already signaled its willingness to develop predictive analytics for its own purposes.
So how does predictive border policing work?
It works by using AI to predict which route a group of people on the move might take to cross a border, so that border enforcement can decide, for example, whether to station a platoon in a certain place. It helps them with their operations and can lead to pushbacks, which can potentially lead to rights-infringing situations.
Can you explain what the difference is between border agencies using this kind of technology to try to predict where people are and just using their own brains?
So for the past few years, reports about pushbacks have been marred by allegations of human rights abuses. And we’re still having that baseline discussion and debate around the humanitarian side of pushbacks. But with predictive border analytics, it’s as if we’ve skipped a few steps in that discussion. Because this technology adds a layer of efficiency to basically make it easier for border agencies to meet their needs and their quotas.
So we haven’t even properly talked about the humanitarian implications of these violent pushbacks and already they’re using technology to ramp up their operations.
Right. There’s also very little transparency about what exactly is happening and what kind of tools are being used. There needs to be a complete rethink about why we’re even leaning on these tools in the first place.
Can you talk a bit more about how the border is sometimes considered a separate space — and why it could be exempt from things like the AI Act?
So often the border gets conflated with national security issues. The space is already opaque and discretionary, but as soon as you slap on that national security label, it becomes very difficult to access information about what’s really happening. Responsibility, oversight and accountability are all muddied in this space — and that gets worse when you add tech on top of that.
You’ve talked before about how the concept of the border is moving away from the physical frontier to much further afield — and even beginning to exist in our own bodies. Can you just explain that a bit?
There’s this idea of the “shifting border.” Sometimes people call it border externalization. It’s not anything super new: The U.S. has been doing it for a while. The basic idea is that it removes the physical border from its geographic location and pushes it further afield. Either vertically, as with aerial surveillance, where the border is now in the sky. Or horizontally, by creating a surveillance dragnet that starts thousands of miles away from the actual border. For instance, the U.S. border actually starts in Central America when it comes to data sharing and surveillance. And the European Union is really leading the way in terms of externalizing its border into North and Sub-Saharan Africa. Niger, for example, gets a lot of money from the EU to do a lot of border enforcement. If you can prevent people from physically being on EU territory, where international human rights laws and refugee laws kick in, then half your work is done.
So basically someone is criminalized and marked out as a potential migrant before they’ve even tried to come to Europe?
Exactly. Predictive analytics and social media scraping try to make predictions about who might be likely to move and whether they’re a risk. Like, ‘Oh, they happen to go to this particular mosque every Friday with their family, so let’s mark that as a potential red flag.’ So the border as a physical space just becomes a performance. Even our phones can become a border. You can be tracked in terms of how you’re interacting on Twitter or Facebook or TikTok. So we have to actually move away from these rigid understandings of what constitutes a border.