Marseille’s fight against AI surveillance
The southern French city, once synonymous with urban crime, now encapsulates the spread of AI surveillance driven by Chinese companies
- Illustration by Gogi Kamushadze
In 2016, Netflix launched its first European production – a twisty political drama titled “Marseille.” Set in the historic port city, the series starred Gérard Depardieu and was supposed to be France’s answer to the hit U.S. show “House of Cards.” Instead, “Marseille” was widely panned for amplifying stereotypes about the city and reheating its former notoriety for crime, corruption and Kalashnikovs. One critic at the French newspaper Le Monde described the show as “une bouse” – or, in English, “cow shit.”
But beyond its exaggerated dialogue and theatrical sex scenes, the show illustrated the lure of surveillance for a city administration desperate to replace its reputation as the “French capital of crime” with a lucrative tourism industry. In the second season, the show’s deputy mayor suggests sacrificing Marseille’s arts budget for state-of-the-art CCTV cameras. “The city deserves to be safer,” she reasons.
The real-life deputy mayor, Caroline Pozmentier, shares a similar zeal for urban surveillance. She has advocated for Marseille to claim the title of Europe’s first “safe city” – a term tech companies use to describe cities using their products to reduce crime. “We start from the principle that without security, there is no economic and tourist development possible,” Pozmentier said in a 2016 interview with Marseille’s newspaper La Provence, describing her new surveillance initiatives.
Throughout her tenure, Marseille has been experimenting with “safe city” tools that integrate artificial intelligence (AI), citizen data and local government surveillance networks. Pozmentier declined multiple requests for comment on this story, but a spokesperson for Marseille’s town hall confirmed the city is already using “predictive policing” technology that allows authorities to use big data to “anticipate” crimes likely to take place in the future. Last month, the city administration was in court, defending its right to roll out “intelligent video surveillance” that will use AI to search through surveillance footage, automatically spot crimes and alert police officers to suspicious behavior.
The hearing, which took place at Marseille’s Administrative Tribunal, pointed to mounting local resistance to AI surveillance. “In court, we were alleging the installation of these illegal video surveillance systems creates a direct and serious interference with the right to privacy and freedom of expression for Marseille citizens,” says Félix Tréguer, founding member of La Quadrature du Net, the French digital rights group which brought the case to court alongside France’s League of Human Rights. The activists’ court appeal to stop Marseille’s video surveillance project was rejected.
A spokesperson for Marseille’s town hall confirmed to Coda Story that parts of the Intelligent Video Surveillance project are now in use. Investigators are already able to search through footage recorded by the city’s army of 2,000 surveillance cameras using “filters” that can detect people, vehicles, certain clothing colors or objects moving in specific directions or at certain speeds.
In the future, the intelligent video project will also be able to alert police in real time if the technology detects “abnormal behavior such as a person entering a prohibited area, a vehicle traveling in a pedestrian zone, a crowd in a place during the day or at a late hour,” the spokesperson added.
For Tréguer, the ability of this system to automatically detect human beings is an example of technology that outstrips the law and leaves citizens and their rights in a legal grey area. “We need a law that outlines what exactly the authorities are able to do and what are the safeguards,” he says.
Eyes everywhere in Marseille
Marseille is a city charged with contradictions. At the Old Port, yachts float in neat lines on the still Mediterranean water and tourists wheel their suitcases past a Ferris wheel. But just ten minutes inland is the city’s 3rd arrondissement, once labeled Europe’s poorest neighborhood. Over the past year, residents here have watched as surveillance cameras have been installed over cafes, at busy intersections, on quiet residential roads and in front of apartment blocks.
For most Marseille residents, the growing number of security cameras is the only evidence of a new, automated element in their city’s security apparatus. Surveillance using AI means authorities no longer have to rely on human beings to monitor the video streams. Instead, the role of watcher can be automated and the ability to monitor more cameras means even more can be installed.
“[These tools are] a way to have eyes everywhere,” says Steven Feldstein, a fellow at the Carnegie Endowment think tank and author of its AI Global Surveillance Index. In the 2019 index, Feldstein describes how a growing number of places are using AI surveillance tools to accomplish a range of policy objectives – “some lawful, others that violate human rights, and many of which fall into a murky middle ground.”
Félix Tréguer lives in the 3rd arrondissement. The activist shows me around, gesturing towards the cameras that have been positioned with clear views of residents’ front doors. As we walk, we pass graffiti etched into the neighborhood’s whitewashed stone. “La liberté meurt en toute sécurité”, it says – “Freedom dies in total security”.
Tréguer first heard about Marseille’s surveillance projects two years ago. He was surprised. “At the time, a lot of people were talking about social credit scoring in China. Or predictive policing in the U.S. But that seemed a bit removed from mainland Europe at the time,” he says.
Since then, the activist and researcher has launched Technopolice, a website designed to raise awareness about the spread of “safe city” projects across France. He has also been embroiled in legal battles, attempting to slow the country’s attempts to integrate new surveillance technology into local security set-ups. Last year, Tréguer and fellow activists successfully halted plans for two high schools in Marseille and Nice to experiment with facial recognition at the school gates.
The city appears to be establishing itself as a test-ground for a new generation of surveillance tools. Like the facial recognition project, Marseille’s “intelligent video surveillance” is referred to by the local administration as an experiment. “[These projects] are meant to provide money for these experiments which are a way for private companies to further develop their products through on-the-ground experimentations,” says Tréguer. “It’s a mutual learning process and a way to further develop these technologies, which are not yet ready for the market.”
For Tréguer, the money directed toward these projects could be better spent elsewhere. He notes the timing of the surveillance splurge, which comes as homes around the city crumble from neglect. In 2018, eight people were killed when two dilapidated buildings in the 1st arrondissement collapsed. In the aftermath, more than 1,000 people were evacuated from buildings found to be unsafe, and some are still waiting to be rehomed. “Security is not so much about inventing fancy new gadgets,” says Tréguer. “It’s also just about securing actual buildings so that they will not crumble down on people.”
China is a “major driver” of AI
Marseille’s surveillance projects are opaque. Information about how the new technology works leaks out only sporadically, through interviews and freedom of information requests. The spokesperson for Marseille’s town hall would not comment on the manufacturer behind the intelligent surveillance technology but documents obtained by La Quadrature du Net list the manufacturer as SNEF, a French company headquartered in Marseille.
Information obtained by the group through freedom of information requests also reveals how Marseille’s predictive policing project – with the dystopian title, the “Big Data of Public Tranquillity Project” – crunches data provided by the city’s police, fire stations and hospitals to anticipate where and when future crime might take place, using technology developed by French company, Engie Ineo.
Los Angeles uses a similar crime-forecasting tool called PredPol, which automatically analyzes the type, time and location of recent crimes to provide officers with a daily list of “hotspots” where crime is most likely to happen, so they can visit them during their shifts. While supporters say predictive policing helps officers target their resources more efficiently, critics argue it is profiling by another name, disproportionately targeting neighborhoods with low-income residents and minority groups.
Marseille-based journalist Rabha Attaf worries about the impact AI surveillance systems will have on populations in Marseille that have already been marginalized. “Any youth delinquency [here] is social delinquency because of racial segregation,” says Attaf, who also runs the human rights NGO Confluences. “There are no opportunities here so people choose another way. They don’t want to be criminals.” In a cafe overlooking the port, she describes an atmosphere of mutual distrust between local authorities and residents with North African roots. “They sell [the idea] to the society that Muslims can become dangerous,” she says. “We are not completely citizens, we are half-citizens.”
Against this backdrop, Attaf is suspicious about who this technology will serve – the people or the police. She points to an incident in 2018, when, during a protest, a police tear-gas canister hit local resident Zineb Redouane in the face as she was closing the shutters of her apartment. The 80-year-old woman died of her injuries, and the officer who fired was never publicly identified. When lawyers representing her family suggested a nearby surveillance camera might provide answers, the local police countered that it wasn’t working.
These fears are compounded by the involvement of the controversial Chinese technology company ZTE in Marseille’s safe city apparatus. ZTE and its subsidiaries have played a significant role in building a surveillance system to monitor ethnic Uyghurs in China’s Xinjiang region. Human rights groups have described the technology being used there as violating privacy, freedom of expression, freedom of movement and the right to be presumed innocent until proven guilty. Last year, the U.S. Commerce Department added ZTE to a blacklist of Chinese tech companies and government bureaus, citing their role in the suppression of Uyghurs and other ethnic minorities, although trade tensions may also have been a factor.
The spokesperson for Marseille’s town hall confirmed the city had signed two contracts with ZTE in 2011 and 2013 to deploy video cameras. “Few cameras supplied by the contract holder were of the ZTE brand,” said the spokesperson. “Since then, the majority of these cameras have been renewed.” He did not confirm how many were still in use.
Marseille’s embrace of AI surveillance is ambitious, but by no means unique. Nearby Nice, for example, has also been experimenting with safe city technology including big data, facial recognition and even “emotion detection” on local trams.
“The south and the Paris region are the two parts of France where these developments are the strongest,” says Laurent Mucchielli, director of research at The French National Center for Scientific Research. The sociologist, who has performed extensive research on surveillance, sees a correlation between the presence of surveillance technology and local politics, adding that “these are regions where the conservatives are stronger.”
A September 2019 study by the Carnegie Endowment for International Peace found that AI surveillance is now in use in 75 countries worldwide. Feldstein, the report’s author, identified China as a “major driver” of the industry, with companies such as Huawei and ZTE by far the most prolific providers.
When the northern Argentinian province of Jujuy bought surveillance tech from ZTE last year, the local government said the $30 million contract – which included cameras, monitoring centers, emergency services, and telecommunications infrastructure – would help curb street crime.
But the spread of Chinese tech into Latin America prompted Washington to express “concerns.” A State Department spokesperson told Reuters that their worries revolved around China’s ability to gather data and use that information to promote arbitrary surveillance and silence dissent around the globe, although they provided no evidence.
Marseille’s partnership with the company, however, went largely unnoticed. In a 2019 paper produced by the Paris Institute of Political Studies, researcher Alvaro Artigas wrote: “Local governments have become the most important partner for ZTE as they are the gatekeepers for contracting opportunities.” While federal or national programs attract a great deal of scrutiny, he added, local contracts escape that same level of attention, creating a backdoor into Europe for Chinese surveillance tech.
As unease spreads, it will be up to the country’s courts to decide if AI surveillance will continue to play a role in the future of French security. Or perhaps France will decide to side with Netflix’s mayor of Marseille, played by Benoît Magimel in season two. “Cameras and cops can’t solve everything,” he says.